5.3. Understanding Domain Transitions: sepolicy transition
5.3. Understanding Domain Transitions: sepolicy transition Previously, the setrans utility was used to examine whether a transition between two domain or process types is possible and to print all intermediary types that are used to transition between these domains or processes. In Red Hat Enterprise Linux 7, the setrans functionality is provided as part of the sepolicy suite and the sepolicy transition command is now used instead. The sepolicy transition command queries an SELinux policy and creates a process transition report. The sepolicy transition command requires two command-line arguments: a source domain (specified by the -s option) and a target domain (specified by the -t option). If only the source domain is entered, sepolicy transition lists all possible domains that the source domain can transition to. The following output does not contain all entries. The "@" character means "execute": If the target domain is specified, sepolicy transition examines the SELinux policy for all transition paths from the source domain to the target domain and lists these paths. The output below is not complete: See the sepolicy-transition(8) manual page for further information about sepolicy transition.
[ "~]USD sepolicy transition -s httpd_t httpd_t @ httpd_suexec_exec_t --> httpd_suexec_t httpd_t @ mailman_cgi_exec_t --> mailman_cgi_t httpd_t @ abrt_retrace_worker_exec_t --> abrt_retrace_worker_t httpd_t @ dirsrvadmin_unconfined_script_exec_t --> dirsrvadmin_unconfined_script_t httpd_t @ httpd_unconfined_script_exec_t --> httpd_unconfined_script_t", "~]USD sepolicy transition -s httpd_t -t system_mail_t httpd_t @ exim_exec_t --> system_mail_t httpd_t @ courier_exec_t --> system_mail_t httpd_t @ sendmail_exec_t --> system_mail_t httpd_t ... httpd_suexec_t @ sendmail_exec_t --> system_mail_t httpd_t ... httpd_suexec_t @ exim_exec_t --> system_mail_t httpd_t ... httpd_suexec_t @ courier_exec_t --> system_mail_t httpd_t ... httpd_suexec_t ... httpd_mojomojo_script_t @ sendmail_exec_t --> system_mail_t" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/security-enhanced_linux-the-sepolicy-suite-sepolicy_transition
Chapter 26. Upgrading MySQL
Chapter 26. Upgrading MySQL Red Hat is committed to fully supporting the upstream version of MySQL, which is currently included in Red Hat Enterprise Linux, until the end of the Maintenance Support 2 Phase, as long as upstream security and bug fixes are available. For an overview of the Red Hat Enterprise Linux life cycle, see https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase . More recent versions of MySQL, MySQL 5.6 and MySQL 5.7, are provided as the rh-mysql56 and rh-mysql57 Software Collections. These components are part of Red Hat Software Collections, available for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. For information on how to get access to Red Hat Software Collections, see the Red Hat Software Collections Release Notes . See the Red Hat Software Collections Product Life Cycle document for information regarding the length of support for individual components. Note that you cannot migrate directly from MySQL 5.1 to the currently supported versions. Refer to the detailed procedures describing how to migrate from MySQL 5.1 to MySQL 5.5 , from MySQL 5.5 to MySQL 5.6 , and from MySQL 5.6 to MySQL 5.7 .
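As a minimal sketch of how one of these collections might be installed and enabled, assuming the Red Hat Software Collections repository is already enabled on the system and using rh-mysql57 as the example (the service name shown is the one documented for that collection):

    # Install the MySQL 5.7 Software Collection metapackage
    sudo yum install rh-mysql57
    # Start the database service provided by the collection
    sudo service rh-mysql57-mysqld start
    # Open a shell with the collection's binaries on the PATH and check the client version
    scl enable rh-mysql57 bash
    mysql --version

The scl enable step only affects the current shell session; system services run with the collection's own paths regardless.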
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-upgrading_mysql
Chapter 18. CIDR range definitions
Chapter 18. CIDR range definitions You must specify non-overlapping ranges for the following CIDR ranges. Note Machine CIDR ranges cannot be changed after creating your cluster. Important OVN-Kubernetes, the default network provider in OpenShift Container Platform 4.11 and later, uses the 100.64.0.0/16 IP address range internally. If your cluster uses OVN-Kubernetes, do not include the 100.64.0.0/16 IP address range in any other CIDR definitions in your cluster. 18.1. Machine CIDR In the Machine CIDR field, you must specify the IP address range for machines or cluster nodes. The default is 10.0.0.0/16 . This range must not conflict with any connected networks. 18.2. Service CIDR In the Service CIDR field, you must specify the IP address range for services. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 172.30.0.0/16 . 18.3. Pod CIDR In the pod CIDR field, you must specify the IP address range for pods. The pod CIDR is the same as the clusterNetwork CIDR and the cluster CIDR. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 10.128.0.0/14 . You can expand the range after cluster installation. Additional resources Cluster Network Operator Configuration 18.4. Host Prefix In the Host Prefix field, you must specify the subnet prefix length assigned to pods scheduled to individual machines. The host prefix determines the pod IP address pool for each machine. For example, if the host prefix is set to /23 , each machine is assigned a /23 subnet from the pod CIDR address range. The default is /23 , allowing 510 cluster nodes, and 510 pod IP addresses per node.
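Purely as an illustration, the same four ranges map onto the networking stanza of an OpenShift Container Platform install-config.yaml as sketched below; the values shown are the documented defaults, and managed-cluster wizards set the equivalent fields through the web form instead:

    networking:
      networkType: OVNKubernetes
      clusterNetwork:
      - cidr: 10.128.0.0/14     # pod CIDR
        hostPrefix: 23          # host prefix
      machineNetwork:
      - cidr: 10.0.0.0/16       # machine CIDR
      serviceNetwork:
      - 172.30.0.0/16           # service CIDR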
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/cidr-range-definitions
Chapter 1. Getting started using the RHEL web console
Chapter 1. Getting started using the RHEL web console The following sections aim to help you install the web console in Red Hat Enterprise Linux 7 and open the web console in your browser. You will also learn how to add remote hosts and monitor them in the web console. 1.1. Prerequisites Installed Red Hat Enterprise Linux 7.5 or newer. Enabled networking. Registered system with an appropriate subscription attached. To obtain a subscription, see Managing subscriptions in the web console . 1.2. What is the RHEL web console The RHEL web console is a Red Hat Enterprise Linux 7 web-based interface designed for managing and monitoring your local system, as well as Linux servers located in your network environment. The RHEL web console enables you to perform a wide range of administration tasks, including: Managing services Managing user accounts Managing and monitoring system services Configuring network interfaces and firewall Reviewing system logs Managing virtual machines Creating diagnostic reports Setting kernel dump configuration Configuring SELinux Updating software Managing system subscriptions The RHEL web console uses the same system APIs as you would use in a terminal, and actions performed in a terminal are immediately reflected in the RHEL web console. You can monitor the logs of systems in the network environment, as well as their performance, displayed as graphs. In addition, you can change the settings directly in the web console or through the terminal. 1.3. Installing the web console Red Hat Enterprise Linux 7 includes the RHEL web console installed by default in many installation variants. If this is not the case on your system, install the cockpit package and set up the cockpit.socket service to enable the RHEL web console. Procedure Install the cockpit package: Optionally, enable and start the cockpit.socket service, which runs a web server. This step is necessary if you need to connect to the system through the web console. To verify the installation and configuration, you can open the web console. If you are using a custom firewall profile, you need to add the cockpit service to firewalld to open port 9090 in the firewall: Additional resources For installing the RHEL web console on a different Linux distribution, see Running Cockpit . 1.4. Logging in to the web console The following describes the first login to the RHEL web console using a system user name and password. Prerequisites Use one of the following browsers for opening the web console: Mozilla Firefox 52 and later Google Chrome 57 and later Microsoft Edge 16 and later System user account credentials The RHEL web console uses a specific PAM stack located at /etc/pam.d/cockpit . Authentication with PAM allows you to log in with the user name and password of any local account on the system. Procedure Open the web console in your web browser: Locally: https://localhost:9090 Remotely with the server's hostname: https://example.com:9090 Remotely with the server's IP address: https://192.0.2.2:9090 If you use a self-signed certificate, the browser issues a warning. Check the certificate and accept the security exception to proceed with the login. The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last file with a .cert extension in alphabetical order. To avoid having to grant security exceptions, install a certificate signed by a certificate authority (CA). In the login screen, enter your system user name and password. Optionally, click the Reuse my password for privileged tasks option. 
If the user account you are using to log in has sudo privileges, this makes it possible to perform privileged tasks in the web console, such as installing software or configuring SELinux. Click Log In . After successful authentication, the RHEL web console interface opens. Additional resources To learn about SSL certificates, see Overview of Certificates and Security of the RHEL System Administrator's Guide.
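To avoid the self-signed certificate warning, a CA-signed certificate can be placed in the directory the console reads. The following is only a sketch, assuming the certificate and private key are in server.crt and server.key (placeholder names) and that, as on Red Hat Enterprise Linux 7, cockpit-ws reads both from a single .cert file:

    # Combine the CA-signed certificate and the private key into one .cert file
    sudo sh -c 'cat server.crt server.key > /etc/cockpit/ws-certs.d/server.cert'
    # Restart the web console so that it picks up the new certificate
    sudo systemctl restart cockpit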
[ "sudo yum install cockpit", "sudo systemctl enable --now cockpit.socket", "sudo firewall-cmd --add-service=cockpit --permanent firewall-cmd --reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/managing_systems_using_the_rhel_7_web_console/getting-started-with-the-rhel-web-console_system-management-using-the-rhel-7-web-console
1.8.3.2. Direct Routing
1.8.3.2. Direct Routing Direct routing provides increased performance benefits compared to NAT routing. Direct routing allows the real servers to process and route packets directly to a requesting user rather than passing outgoing packets through the LVS router. Direct routing reduces the possibility of network performance issues by relegating the job of the LVS router to processing incoming packets only. Figure 1.24. LVS Implemented with Direct Routing In a typical direct-routing LVS configuration, an LVS router receives incoming server requests through a virtual IP (VIP) and uses a scheduling algorithm to route the request to real servers. Each real server processes requests and sends responses directly to clients, bypassing the LVS routers. Direct routing allows for scalability in that real servers can be added without the added burden on the LVS router to route outgoing packets from the real server to the client, which can become a bottleneck under heavy network load. While there are many advantages to using direct routing in LVS, there are limitations. The most common issue with direct routing and LVS is with Address Resolution Protocol ( ARP ). In typical situations, a client on the Internet sends a request to an IP address. Network routers typically send requests to their destination by relating IP addresses to a machine's MAC address with ARP. ARP requests are broadcast to all connected machines on a network, and the machine with the correct IP/MAC address combination receives the packet. The IP/MAC associations are stored in an ARP cache, which is cleared periodically (usually every 15 minutes) and refilled with IP/MAC associations. The issue with ARP requests in a direct-routing LVS configuration is that because a client request to an IP address must be associated with a MAC address for the request to be handled, the virtual IP address of the LVS router must also be associated to a MAC. However, because both the LVS router and the real servers have the same VIP, the ARP request is broadcast to all the nodes associated with the VIP. This can cause several problems, such as the VIP being associated directly to one of the real servers and processing requests directly, bypassing the LVS router completely and defeating the purpose of the LVS configuration. Using an LVS router with a powerful CPU that can respond quickly to client requests does not necessarily remedy this issue. If the LVS router is under heavy load, it may respond to the ARP request more slowly than an underutilized real server, which responds more quickly and is assigned the VIP in the ARP cache of the requesting client. To solve this issue, the incoming requests should only associate the VIP to the LVS router, which will properly process the requests and send them to the real server pool. This can be done by using the arptables packet-filtering tool.
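A minimal sketch of the arptables approach on each real server, assuming a virtual IP of 192.0.2.100 and a real server IP of 192.0.2.10 (both placeholders; on some Red Hat Enterprise Linux releases the tool is packaged as arptables_jf):

    # Do not answer ARP requests for the virtual IP on this real server
    arptables -A IN -d 192.0.2.100 -j DROP
    # Rewrite outgoing ARP replies so they advertise the real IP, not the VIP
    arptables -A OUT -s 192.0.2.100 -j mangle --mangle-ip-s 192.0.2.10
    # Configure the VIP locally so the real server accepts traffic addressed to it
    ip addr add 192.0.2.100/32 dev eth0

With these rules in place, only the LVS router answers ARP for the VIP, while the real servers still accept and respond to the forwarded packets.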
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s3-lvs-directrouting-cso
Appendix B. Using Red Hat Enterprise Linux packages
Appendix B. Using Red Hat Enterprise Linux packages This section describes how to use software delivered as RPM packages for Red Hat Enterprise Linux. To ensure the RPM packages for this product are available, you must first register your system . B.1. Overview A component such as a library or server often has multiple packages associated with it. You do not have to install them all. You can install only the ones you need. The primary package typically has the simplest name, without additional qualifiers. This package provides all the required interfaces for using the component at program run time. Packages with names ending in -devel contain headers for C and C++ libraries. These are required at compile time to build programs that depend on this package. Packages with names ending in -docs contain documentation and example programs for the component. For more information about using RPM packages, see one of the following resources: Red Hat Enterprise Linux 7 - Installing and managing software Red Hat Enterprise Linux 8 - Managing software packages B.2. Searching for packages To search for packages, use the yum search command. The search results include package names, which you can use as the value for <package> in the other commands listed in this section. USD yum search <keyword>... B.3. Installing packages To install packages, use the yum install command. USD sudo yum install <package>... B.4. Querying package information To list the packages installed in your system, use the rpm -qa command. USD rpm -qa To get information about a particular package, use the rpm -qi command. USD rpm -qi <package> To list all the files associated with a package, use the rpm -ql command. USD rpm -ql <package>
[ "yum search <keyword>", "sudo yum install <package>", "rpm -qa", "rpm -qi <package>", "rpm -ql <package>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/using_red_hat_enterprise_linux_packages
3.10. Migrating from ext4 to XFS
3.10. Migrating from ext4 to XFS Starting with Red Hat Enterprise Linux 7.0, XFS is the default file system instead of ext4. This section highlights the differences when using or administering an XFS file system. The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and can be selected at installation. While it is possible to migrate from ext4 to XFS, it is not required. 3.10.1. Differences Between Ext3/4 and XFS File system repair Ext3/4 runs e2fsck in userspace at boot time to recover the journal as needed. XFS, by comparison, performs journal recovery in kernelspace at mount time. An fsck.xfs shell script is provided but does not perform any useful action as it is only there to satisfy initscript requirements. When an XFS file system repair or check is requested, use the xfs_repair command. Use the -n option for a read-only check. The xfs_repair command will not operate on a file system with a dirty log. To repair such a file system, mount and unmount it first to replay the log. If the log is corrupt and cannot be replayed, the -L option can be used to zero out the log. For more information on file system repair of XFS file systems, see Section 12.2.2, "XFS" Metadata error behavior The ext3/4 file system has configurable behavior when metadata errors are encountered, with the default being to simply continue. When XFS encounters a metadata error that is not recoverable, it will shut down the file system and return an EFSCORRUPTED error. The system logs will contain details of the error encountered and will recommend running xfs_repair if necessary. Quotas XFS quotas are not a remountable option. The -o quota option must be specified on the initial mount for quotas to be in effect. While the standard tools in the quota package can perform basic quota administrative tasks (tools such as setquota and repquota ), the xfs_quota tool can be used for XFS-specific features, such as Project Quota administration. The quotacheck command has no effect on an XFS file system. The first time quota accounting is turned on, XFS does an automatic quotacheck internally. Because XFS quota metadata is a first-class, journaled metadata object, the quota system will always be consistent until quotas are manually turned off. File system resize The XFS file system has no utility to shrink a file system. XFS file systems can be grown online via the xfs_growfs command. Inode numbers For file systems larger than 1 TB with 256-byte inodes, or larger than 2 TB with 512-byte inodes, XFS inode numbers might exceed 2^32. Such large inode numbers cause 32-bit stat calls to fail with the EOVERFLOW return value. The described problem might occur when using the default Red Hat Enterprise Linux 7 configuration: non-striped with four allocation groups. A custom configuration, for example file system extension or changing XFS file system parameters, might lead to a different behavior. Applications usually handle such larger inode numbers correctly. If needed, mount the XFS file system with the -o inode32 parameter to enforce inode numbers below 2^32. Note that using inode32 does not affect inodes that are already allocated with 64-bit numbers. Important Do not use the inode32 option unless it is required by a specific environment. The inode32 option changes allocation behavior. As a consequence, the ENOSPC error might occur if no space is available to allocate inodes in the lower disk blocks. 
Speculative preallocation XFS uses speculative preallocation to allocate blocks past EOF as files are written. This avoids file fragmentation due to concurrent streaming write workloads on NFS servers. By default, this preallocation increases with the size of the file and will be apparent in "du" output. If a file with speculative preallocation is not dirtied for five minutes, the preallocation will be discarded. If the inode is cycled out of cache before that time, then the preallocation will be discarded when the inode is reclaimed. If premature ENOSPC problems are seen due to speculative preallocation, a fixed preallocation amount may be specified with the -o allocsize= amount mount option. Fragmentation-related tools Fragmentation is rarely a significant issue on XFS file systems due to heuristics and behaviors, such as delayed allocation and speculative preallocation. However, tools exist for measuring file system fragmentation as well as defragmenting file systems. Their use is not encouraged. The xfs_db frag command attempts to distill all file system allocations into a single fragmentation number, expressed as a percentage. The output of the command requires significant expertise to understand its meaning. For example, a fragmentation factor of 75% means only an average of 4 extents per file. For this reason, the output of xfs_db's frag is not considered useful and more careful analysis of any fragmentation problems is recommended. Warning The xfs_fsr command may be used to defragment individual files, or all files on a file system. The latter is especially not recommended as it may destroy locality of files and may fragment free space. Commands Used with ext3 and ext4 Compared to XFS The following table compares common commands used with ext3 and ext4 to their XFS-specific counterparts. Table 3.1. Common Commands for ext3 and ext4 Compared to XFS Task ext3/4 XFS Create a file system mkfs.ext4 or mkfs.ext3 mkfs.xfs File system check e2fsck xfs_repair Resizing a file system resize2fs xfs_growfs Save an image of a file system e2image xfs_metadump and xfs_mdrestore Label or tune a file system tune2fs xfs_admin Backup a file system dump and restore xfsdump and xfsrestore The following table lists generic tools that function on XFS file systems as well, but the XFS versions have more specific functionality and as such are recommended. Table 3.2. Generic Tools for ext4 and XFS Task ext4 XFS Quota quota xfs_quota File mapping filefrag xfs_bmap More information on many of the listed XFS commands is included in Chapter 3, The XFS File System . You can also consult the manual pages of the listed XFS administration tools for more information.
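The following short sequence illustrates the XFS counterparts discussed above; /dev/vdb1 and /mnt/data are placeholder device and mount point names:

    # Create an XFS file system (the Red Hat Enterprise Linux 7 default)
    mkfs.xfs /dev/vdb1
    # Read-only consistency check; run against an unmounted device
    xfs_repair -n /dev/vdb1
    # Mount, optionally with inode32 if 32-bit inode numbers are required
    mount -o inode32 /dev/vdb1 /mnt/data
    # Grow the mounted file system to fill the underlying device (XFS cannot shrink)
    xfs_growfs /mnt/data
    # Report quota usage with the XFS-specific tool
    xfs_quota -x -c 'report -h' /mnt/data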
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/migrating-ext4-xfs
Chapter 2. Pools overview
Chapter 2. Pools overview Ceph clients store data in pools. When you create pools, you are creating an I/O interface for clients to store data. From the perspective of a Ceph client, that is, block device, gateway, and the rest, interacting with the Ceph storage cluster is remarkably simple: Create a cluster handle. Connect the cluster handle to the cluster. Create an I/O context for reading and writing objects and their extended attributes. Creating a cluster handle and connecting to the cluster To connect to the Ceph storage cluster, the Ceph client needs the following details: The cluster name, which is ceph by default. The cluster name is not usually specified explicitly. An initial monitor address. Ceph clients usually retrieve these parameters using the default path for the Ceph configuration file and then read it from the file, but a user might also specify the parameters on the command line. Because authentication is on by default, the Ceph client also provides a user name and secret key. Then, the client contacts the Ceph monitor cluster and retrieves a recent copy of the cluster map, including its monitors, OSDs and pools. Creating a pool I/O context To read and write data, the Ceph client creates an I/O context to a specific pool in the Ceph storage cluster. If the specified user has permissions for the pool, the Ceph client can read from and write to the specified pool. Ceph's architecture enables the storage cluster to provide this remarkably simple interface to Ceph clients so that clients might select one of the sophisticated storage strategies you define simply by specifying a pool name and creating an I/O context. Storage strategies are invisible to the Ceph client in all but capacity and performance. Similarly, the complexities of Ceph clients, such as mapping objects into a block device representation or providing an S3/Swift RESTful service, are invisible to the Ceph storage cluster. A pool provides you with resilience, placement groups, CRUSH rules, and quotas. Resilience : You can set how many OSDs are allowed to fail without losing data. For replicated pools, it is the desired number of copies or replicas of an object. A typical configuration stores an object and one additional copy, that is, size = 2 , but you can determine the number of copies or replicas. For erasure coded pools, it is the number of coding chunks, that is m=2 in the erasure code profile . Placement Groups : You can set the number of placement groups for the pool. A typical configuration uses approximately 50-100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of placement groups for both the pool and the cluster as a whole. CRUSH Rules : When you store data in a pool, a CRUSH rule mapped to the pool enables CRUSH to identify the rule for the placement of each object and its replicas, or chunks for erasure coded pools, in your cluster. You can create a custom CRUSH rule for your pool. Quotas : When you set quotas on a pool with the ceph osd pool set-quota command, you can limit the maximum number of objects or the maximum number of bytes stored in the specified pool.
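A brief sketch using the command-line tools; the pool name mypool, the placement group count, and the quota values are chosen purely for illustration:

    # Create a replicated pool with 128 placement groups
    ceph osd pool create mypool 128
    # Keep three copies of each object (resilience)
    ceph osd pool set mypool size 3
    # Cap the pool at 10,000 objects and 100 GB
    ceph osd pool set-quota mypool max_objects 10000
    ceph osd pool set-quota mypool max_bytes 107374182400
    # Store and retrieve an object through an I/O context on this pool
    rados -p mypool put test-object /etc/hosts
    rados -p mypool get test-object /tmp/test-object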
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/edge_guide/pools-overview_edge
Part III. Device Drivers
Part III. Device Drivers This part provides a comprehensive listing of all device drivers which were updated in Red Hat Enterprise Linux 6.8.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/part-red_hat_enterprise_linux-6.8_technical_notes-device_drivers
4.338. virt-top
4.338. virt-top 4.338.1. RHBA-2011:1692 - virt-top bug fix and enhancement update An updated virt-top package that fixes three bugs and adds one enhancement is now available for Red Hat Enterprise Linux 6. The virt-top utility displays statistics of virtualized domains and uses many of the same keys and command line options as the top utility. Bug Fixes BZ# 730208 Prior to this update, the terminal was not properly restored if the --csv flag was given. This update modifies the code so that the terminal is now restored in the correct mode. BZ# 665817 Previously, the CSV output of virt-top contained only the headers for the first virtual machine. This update adds a processcsv.py script to virt-top so that the CSV output can now be split up into multiple files, each file containing full headers. BZ# 680031 When a libvirt error happens early during virt-top start-up, an obscure error message can be printed. With this update, the manual page contains added instructions for debugging libvirt errors that can occur during program initialization. Enhancement BZ# 680027 With this update, the domain memory information is now displayed in the CSV output mode. All virt-top users are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement.
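For example, the CSV mode and the helper script mentioned above might be combined as follows; stats.csv is a placeholder file name and the exact invocation of processcsv.py may differ, so check the script's documentation:

    # Collect statistics in CSV mode
    virt-top --csv stats.csv
    # Split the combined CSV into separate files, each with full headers
    processcsv.py stats.csv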
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/virt-top
Chapter 6. Uninstalling Red Hat Advanced Cluster Security for Kubernetes
Chapter 6. Uninstalling Red Hat Advanced Cluster Security for Kubernetes When you install Red Hat Advanced Cluster Security for Kubernetes, it creates: A namespace called rhacs-operator where the Operator is installed, if you chose the Operator method of installation A namespace called stackrox , or another namespace where you created the Central and SecuredCluster custom resources PodSecurityPolicy and Kubernetes role-based access control (RBAC) objects for all components Additional labels on namespaces, for use in generated network policies An application custom resource definition (CRD), if it does not exist Uninstalling Red Hat Advanced Cluster Security for Kubernetes involves deleting all of these items. 6.1. Deleting namespace You can delete the namespace that Red Hat Advanced Cluster Security for Kubernetes creates by using the OpenShift Container Platform or Kubernetes command-line interface. Procedure Delete the stackrox namespace: On OpenShift Container Platform: USD oc delete namespace stackrox On Kubernetes: USD kubectl delete namespace stackrox Note If you installed RHACS in a different namespace, use the name of that namespace in the delete command. 6.2. Deleting global resources You can delete the global resources that Red Hat Advanced Cluster Security for Kubernetes (RHACS) creates by using the OpenShift Container Platform or Kubernetes command-line interface (CLI). Procedure To delete the global resources by using the OpenShift Container Platform CLI, perform the following steps: Retrieve all the StackRox-related cluster roles, cluster role bindings, roles, role bindings, and PSPs, and then delete them by running the following command: USD oc get clusterrole,clusterrolebinding,role,rolebinding,psp -o name | grep stackrox | xargs oc delete --wait Note You might receive the error: the server doesn't have a resource type "psp" error message in RHACS 4.4 and later versions because the pod security policies (PSPs) are deprecated. The PSPs were removed from Kubernetes in version 1.25, except for clusters with older Kubernetes versions. Delete the custom security context constraints (SCCs) labeled with app.kubernetes.io/name=stackrox by running the following command: USD oc delete scc -l "app.kubernetes.io/name=stackrox" Note You might receive the No resources found error message in RHACS 4.4 and later versions because the custom SCCs with this label are no longer used in these versions. Delete the ValidatingWebhookConfiguration object named stackrox by running the following command: USD oc delete ValidatingWebhookConfiguration stackrox To delete the global resources by using the Kubernetes CLI, perform the following steps: Retrieve all the StackRox-related cluster roles, cluster role bindings, roles, role bindings, and PSPs, and then delete them by running the following command: USD kubectl get clusterrole,clusterrolebinding,role,rolebinding,psp -o name | grep stackrox | xargs kubectl delete --wait Note You might receive the error: the server doesn't have a resource type "psp" error message in RHACS 4.4 and later versions because the pod security policies (PSPs) are deprecated. The PSPs were removed from Kubernetes in version 1.25, except for clusters with older Kubernetes versions. Delete the ValidatingWebhookConfiguration object named stackrox by running the following command: USD kubectl delete ValidatingWebhookConfiguration stackrox 6.3. 
Deleting labels and annotations You can delete the labels and annotations that Red Hat Advanced Cluster Security for Kubernetes creates, by using the OpenShift Container Platform or Kubernetes command-line interface. Procedure Delete labels and annotations: On OpenShift Container Platform: USD for namespace in USD(oc get ns | tail -n +2 | awk '{print USD1}'); do oc label namespace USDnamespace namespace.metadata.stackrox.io/id-; oc label namespace USDnamespace namespace.metadata.stackrox.io/name-; oc annotate namespace USDnamespace modified-by.stackrox.io/namespace-label-patcher-; done On Kubernetes: USD for namespace in USD(kubectl get ns | tail -n +2 | awk '{print USD1}'); do kubectl label namespace USDnamespace namespace.metadata.stackrox.io/id-; kubectl label namespace USDnamespace namespace.metadata.stackrox.io/name-; kubectl annotate namespace USDnamespace modified-by.stackrox.io/namespace-label-patcher-; done
[ "oc delete namespace stackrox", "kubectl delete namespace stackrox", "oc get clusterrole,clusterrolebinding,role,rolebinding,psp -o name | grep stackrox | xargs oc delete --wait", "oc delete scc -l \"app.kubernetes.io/name=stackrox\"", "oc delete ValidatingWebhookConfiguration stackrox", "kubectl get clusterrole,clusterrolebinding,role,rolebinding,psp -o name | grep stackrox | xargs kubectl delete --wait", "kubectl delete ValidatingWebhookConfiguration stackrox", "for namespace in USD(oc get ns | tail -n +2 | awk '{print USD1}'); do oc label namespace USDnamespace namespace.metadata.stackrox.io/id-; oc label namespace USDnamespace namespace.metadata.stackrox.io/name-; oc annotate namespace USDnamespace modified-by.stackrox.io/namespace-label-patcher-; done", "for namespace in USD(kubectl get ns | tail -n +2 | awk '{print USD1}'); do kubectl label namespace USDnamespace namespace.metadata.stackrox.io/id-; kubectl label namespace USDnamespace namespace.metadata.stackrox.io/name-; kubectl annotate namespace USDnamespace modified-by.stackrox.io/namespace-label-patcher-; done" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/installing/uninstall-acs
14.9.2. Red Hat Documentation
14.9.2. Red Hat Documentation System Administrators Guide; Red Hat, Inc. - The Samba chapter explains how to configure a Samba server.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-resources-RH
Chapter 1. Clair for Red Hat Quay
Chapter 1. Clair for Red Hat Quay Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 1.1. About Clair The content in this section highlights Clair releases, official Clair containers, and information about CVSS enrichment data. 1.1.1. Clair releases New versions of Clair are regularly released. The source code needed to build Clair is packaged as an archive and attached to each release. Clair releases can be found at Clair releases . Release artifacts also include the clairctl command line interface tool, which obtains updater data from the internet by using an open host. Clair 4.7.1 Clair 4.7.1 was released as part of Red Hat Quay 3.9.1. The following changes have been made: With this release, you can view unpatched vulnerabilities from Red Hat Enterprise Linux (RHEL) sources. If you want to view unpatched vulnerabilities, you can set the ignore_unpatched parameter to false . For example: updaters: config: rhel: ignore_unpatched: false To disable this feature, you can set ignore_unpatched to true . Clair 4.7 Clair 4.7 was released as part of Red Hat Quay 3.9, and includes support for the following features: Native support for indexing Golang modules and RubyGems in container images. Change to OSV.dev as the vulnerability database source for any programming language package managers. This includes popular sources like GitHub Security Advisories or PyPA. This allows offline capability. Use of pyup.io for Python and CRDA for Java is suspended. Clair now supports Java, Golang, Python, and Ruby dependencies. 1.1.2. Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. 1.1.3. Clair containers Official downstream Clair containers bundled with Red Hat Quay can be found on the Red Hat Ecosystem Catalog . Official upstream containers are packaged and released as a container at Quay.io/projectquay/clair . The latest tag tracks the Git development branch. Version tags are built from the corresponding release. 1.2. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMWare Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 1.2.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . 
Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website .
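For illustration, the clairctl tool mentioned above can export updater data on a connected machine and import it on a disconnected one. This is a sketch, assuming a deployed Clair configuration file and using updates.gz as a placeholder archive name:

    # On a host with internet access: download updater data to a local archive
    clairctl export-updaters updates.gz
    # On the disconnected Clair host: load the archive into the Clair database
    clairctl --config clair-config.yaml import-updaters updates.gz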
[ "updaters: config: rhel: ignore_unpatched: false" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-vulnerability-scanner
Chapter 9. Exposing 3scale API Management APIcast Metrics to Prometheus
Chapter 9. Exposing 3scale API Management APIcast Metrics to Prometheus Important For this release of 3scale, Prometheus installation and configuration are not supported. Optionally, you can use the community version of Prometheus to visualize metrics and alerts for APIcast-managed API services. 9.1. About Prometheus Prometheus is an open-source systems monitoring toolkit that you can use to monitor Red Hat 3scale API Management APIcast services deployed in the Red Hat OpenShift environment. If you want to monitor your services with Prometheus, your services must expose a Prometheus endpoint. This endpoint is an HTTP interface that exposes a list of metrics and the current value of the metrics. Prometheus periodically scrapes these target-defined endpoints and writes the collected data into its database. 9.1.1. Prometheus queries In the Prometheus UI, you can write queries in Prometheus Query Language ( PromQL ) to extract metric information. With PromQL, you can select and aggregate time series data in real time. For example, you can use the following query to select all the values that Prometheus has recorded within the last 5 minutes for all time series that have the metric name http_requests_total : You can further define or filter the results of a query by specifying a label (a key:value pair) for the metric. For example, you can use the following query to select all the values that Prometheus has recorded within the last 5 minutes for all time series that have the metric name http_requests_total and a job label set to integration : The result of a query can either be shown as a graph, viewed as tabular data in Prometheus's expression browser, or consumed by external systems by using the Prometheus HTTP API . Prometheus provides a graphical view of the data. For a more robust graphical dashboard to view Prometheus metrics, Grafana is a popular choice. You can also use the PromQL language to configure alerts in the Prometheus alertmanager tool. Note Grafana is a community-supported feature. Deploying Grafana to monitor 3scale API Management products is not supported with Red Hat production service level agreements (SLAs). 9.2. APIcast integration with Prometheus APIcast integration with Prometheus is available for the following deployment options: Self-managed APIcast - with either the 3scale Hosted or the On-premises API manager. Embedded APIcast in 3scale On-premises. Note APIcast integration with Prometheus is not available in hosted API manager and hosted APIcast. By default, Prometheus can monitor the APIcast metrics listed in Table 9.2, "Prometheus Default Metrics for 3scale API Management APIcast" . 9.2.1. Additional options Optionally, if you have cluster admin access to the OpenShift cluster, you can extend the total_response_time_seconds , upstream_response_time_seconds , and upstream_status metrics to include service_id and service_system_name labels. To extend these metrics, set the APICAST_EXTENDED_METRICS OpenShift environment variable to true with this command: If you use the 3scale Batcher policy (described in Section 4.1.3, "3scale API Management Batcher" ), Prometheus can also monitor the metrics listed in Table 9.3, "Prometheus Metrics for 3scale API Management APIcast Batch Policy" . Note If a metric has no value, Prometheus hides the metric. For example, if nginx_error_log has no errors to report, Prometheus does not display the nginx_error_log metric. The nginx_error_log metric is only visible if it has a value. 
Additional resources For information about Prometheus, refer to Prometheus: Getting Started . 9.3. OpenShift environment variables for 3scale API Management APIcast To configure your Prometheus instance, you can set the OpenShift environment variable described in Table 9.1, "Prometheus Environment Variables for 3scale API Management APIcast" . Table 9.1. Prometheus Environment Variables for 3scale API Management APIcast Environment Variable Description Default APICAST_EXTENDED_METRICS A boolean value that enables additional information on Prometheus metrics. The following metrics have the service_id and service_system_name labels which provide more in-depth details about APIcast: total_response_time_seconds upstream_response_time_seconds upstream_status false Additional resources For information on setting environment variables, see the relevant OpenShift guides: OpenShift 4: Applications OpenShift 3.11: Developer Guide Red Hat 3scale API Management Supported Configurations 9.4. 3scale API Management APIcast metrics exposed to Prometheus After you set up Prometheus to monitor 3scale APIcast, by default it can monitor the metrics listed in in Table 9.2, "Prometheus Default Metrics for 3scale API Management APIcast" . The metrics listed in Table 9.3, "Prometheus Metrics for 3scale API Management APIcast Batch Policy" are available only when you use the 3scale Batcher policy . Table 9.2. Prometheus Default Metrics for 3scale API Management APIcast Metric Description Type Labels nginx_http_connections Number of HTTP connections gauge state(accepted,active,handled,reading,total,waiting,writing) nginx_error_log APIcast errors counter level(debug,info,notice,warn,error,crit,alert,emerg) openresty_shdict_capacity Capacity of the dictionaries shared between workers gauge dict (one for every dictionary) openresty_shdict_free_space Free space of the dictionaries shared between workers gauge dict (one for every dictionary) nginx_metric_errors_total Number of errors of the Lua library that manages the metrics counter none total_response_time_seconds Time needed to send a response to the client (in seconds) Note : To access the service_id and service_system_name labels, you must set the APICAST_EXTENDED_METRICS environment variable to true as described in Section 9.2, "APIcast integration with Prometheus" . histogram service_id , service_system_name upstream_response_time_seconds Response times from upstream servers (in seconds) Note : To access the service_id and service_system_name labels, you must set the APICAST_EXTENDED_METRICS environment variable to true as described in Section 9.2, "APIcast integration with Prometheus" . histogram service_id , service_system_name upstream_status HTTP status from upstream servers Note : To access the service_id and service_system_name labels, you must set the APICAST_EXTENDED_METRICS environment variable to true as described in Section 9.2, "APIcast integration with Prometheus" . counter status , service_id , service_system_name threescale_backend_calls Authorize and report requests to the 3scale backend (Apisonator) counter endpoint ( authrep , auth , report ), status ( 2xx , 4xx , 5xx ) Table 9.3. 
Prometheus Metrics for 3scale API Management APIcast Batch Policy Metric Description Type Labels apicast_status Number of response status sent by APIcast to client counter status batching_policy_auths_cache_hits Hits in the auths cache of the 3scale batching policy counter none batching_policy_auths_cache_misses Misses in the auths cache of the 3scale batching policy counter none content_caching Number of requests that go through content caching policy counter status ( MISS , BYPASS , EXPIRED , STALE , UPDATING , REVALIDATED , HIT )
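As a rough sketch of how a community Prometheus instance might be pointed at these metrics, the job name, target host, and port below are assumptions; APIcast exposes its Prometheus metrics on port 9421 by default in recent releases, but verify the port for your deployment:

    scrape_configs:
    - job_name: 'apicast'
      static_configs:
      - targets: ['apicast-production.example.com:9421']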
[ "http_requests_total[5m]", "http_requests_total{job=\"integration\"}[5m]", "oc set env deployment/apicast APICAST_EXTENDED_METRICS=true" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/administering_the_api_gateway/prometheus-3scale-apicast
10.4. Configure 802.1Q VLAN Tagging Using the Command Line
10.4. Configure 802.1Q VLAN Tagging Using the Command Line In Red Hat Enterprise Linux 7, the 8021q module is loaded by default. If necessary, you can make sure that the module is loaded by issuing the following command as root : To display information about the module, issue the following command: See the modprobe(8) man page for more command options. 10.4.1. Setting Up 802.1Q VLAN Tagging Using ifcfg Files Configure the parent interface in /etc/sysconfig/network-scripts/ifcfg- device_name , where device_name is the name of the interface: Configure the VLAN interface configuration in the /etc/sysconfig/network-scripts/ directory. The configuration file name should be the parent interface plus a . character plus the VLAN ID number. For example, if the VLAN ID is 192 and the parent interface is enp1s0 , then the configuration file name should be ifcfg-enp1s0.192 : If you need to configure a second VLAN, with, for example, VLAN ID 193 on the same interface enp1s0 , add a new file with the name ifcfg-enp1s0.193 containing the VLAN configuration details; a sketch follows below. Restart the networking service in order for the changes to take effect. As root , issue the following command: 10.4.2. Configure 802.1Q VLAN Tagging Using ip Commands To create an 802.1Q VLAN interface on Ethernet interface enp1s0 , with name enp1s0.8 and ID 8 , issue a command as root as follows: To view the VLAN, issue the following command: Note that the ip utility interprets the VLAN ID as a hexadecimal value if it is preceded by 0x and as an octal value if it has a leading 0 . This means that in order to assign a VLAN ID with a decimal value of 22 , you must not add any leading zeros. To remove the VLAN, issue a command as root as follows: To use multiple interfaces belonging to multiple VLANs, locally create enp1s0.1 and enp1s0.2 with the appropriate VLAN IDs on top of the physical interface enp1s0 : Note that when running a network sniffer on the physical device, you can capture the tagged frames reaching the physical device, even if no VLAN device is configured on top of enp1s0 . For example: Note VLAN interfaces created using ip commands at the command prompt will be lost if the system is shut down or restarted. To configure VLAN interfaces to be persistent after a system restart, use ifcfg files. See Section 10.4.1, "Setting Up 802.1Q VLAN Tagging Using ifcfg Files"
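The companion file for the second VLAN mentioned above could look like the following sketch, where the IP addressing is purely an example:

    # /etc/sysconfig/network-scripts/ifcfg-enp1s0.193
    DEVICE=enp1s0.193
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.2.1
    PREFIX=24
    NETWORK=192.168.2.0
    VLAN=yes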
[ "~]# modprobe --first-time 8021q modprobe: ERROR: could not insert '8021q': Module already in kernel", "~]USD modinfo 8021q", "DEVICE= interface_name TYPE=Ethernet BOOTPROTO=none ONBOOT=yes", "DEVICE=enp1s0.192 BOOTPROTO=none ONBOOT=yes IPADDR=192.168.1.1 PREFIX=24 NETWORK=192.168.1.0 VLAN=yes", "~]# systemctl restart network", "~]# ip link add link enp1s0 name enp1s0.8 type vlan id 8", "~]USD ip -d link show enp1s0.8 4: enp1s0.8@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 52:54:00:ce:5f:6c brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 8 <REORDER_HDR>", "~]# ip link delete enp1s0.8", "~]# ip link add link enp1s0 name enp1s0.1 type vlan id 1 ip link set dev enp1s0.1 up ~]# ip link add link enp1s0 name enp1s0.2 type vlan id 2 ip link set dev enp1s0.2 up", "tcpdump -nnei enp1s0 -vvv" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configure_802_1Q_VLAN_Tagging_Using_the_Command_Line
Chapter 6. Registering Hosts to Satellite
Chapter 6. Registering Hosts to Satellite After you install Satellite Server and Capsule Server, you must register the hosts running on EC2 instances to Satellite. For more information, see Registering Hosts in Managing hosts .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/deploying_red_hat_satellite_on_amazon_web_services/aws-registering-hosts
Chapter 2. Enabling view-only access for your private automation hub
Chapter 2. Enabling view-only access for your private automation hub By enabling view-only access, you can grant access for users to view collections or namespaces on your private automation hub without the need for them to log in. View-only access allows you to share content with unauthorized users while restricting them to viewing or downloading source code, without permissions to edit anything on your private automation hub. Enable view-only access for your private automation hub by editing the inventory file found on your Red Hat Ansible Automation Platform installer. If you are installing a new instance of Ansible Automation Platform, follow these steps to add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to your inventory file along with your other installation configurations. If you are updating an existing Ansible Automation Platform installation to include view-only access, add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to your inventory file and then run the setup.sh script to apply the updates. Procedure Navigate to the installer. Bundled installer USD cd ansible-automation-platform-setup-bundle-<latest-version> Online installer USD cd ansible-automation-platform-setup-<latest-version> Open the inventory file with a text editor. Add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to the inventory file and set both to True , following the example below: [all:vars] automationhub_enable_unauthenticated_collection_access = True 1 automationhub_enable_unauthenticated_collection_download = True 2 1 Allows unauthorized users to view collections 2 Allows unauthorized users to download collections Run the setup.sh script. The installer will now enable view-only access to your automation hub. Verification Once the installation completes, you can verify that you have view-only access on your private automation hub by attempting to view content on your automation hub without logging in. Navigate to your private automation hub. On the login screen, click View only mode . Verify that you are able to view content on your automation hub, such as namespaces or collections, without having to log in.
[ "cd ansible-automation-platform-setup-bundle-<latest-version>", "cd ansible-automation-platform-setup-<latest-version>", "[all:vars] automationhub_enable_unauthenticated_collection_access = True 1 automationhub_enable_unauthenticated_collection_download = True 2" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_user_access_in_private_automation_hub/assembly-view-only-access
Chapter 15. Uninstalling Logging
Chapter 15. Uninstalling Logging You can remove logging from your OpenShift Container Platform cluster by removing installed Operators and related custom resources (CRs). 15.1. Uninstalling the logging You can stop aggregating logs by deleting the Red Hat OpenShift Logging Operator and the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Administration Custom Resource Definitions page, and click ClusterLogging . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and click Delete ClusterLogging . Go to the Administration Custom Resource Definitions page. Click the options menu to ClusterLogging , and select Delete Custom Resource Definition . Warning Deleting the ClusterLogging CR does not remove the persistent volume claims (PVCs). To delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action. Releasing or deleting PVCs can delete PVs and cause data loss. If you have created a ClusterLogForwarder CR, click the options menu to ClusterLogForwarder , and then click Delete Custom Resource Definition . Go to the Operators Installed Operators page. Click the options menu to the Red Hat OpenShift Logging Operator, and then click Uninstall Operator . Optional: Delete the openshift-logging project. Warning Deleting the openshift-logging project deletes everything in that namespace, including any persistent volume claims (PVCs). If you want to preserve logging data, do not delete the openshift-logging project. Go to the Home Projects page. Click the options menu to the openshift-logging project, and then click Delete Project . Confirm the deletion by typing openshift-logging in the dialog box, and then click Delete . 15.2. Deleting logging PVCs To keep persistent volume claims (PVCs) for reuse with other pods, keep the labels or PVC names that you need to reclaim the PVCs. If you do not want to keep the PVCs, you can delete them. If you want to recover storage space, you can also delete the persistent volumes (PVs). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Storage Persistent Volume Claims page. Click the options menu to each PVC, and select Delete Persistent Volume Claim . 15.3. Uninstalling Loki Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you have removed references to LokiStack from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click LokiStack . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and then click Delete LokiStack . Go to the Administration Custom Resource Definitions page. Click the options menu to LokiStack , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu to the Loki Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. 
Go to the Home Projects page. Click the options menu to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 15.4. Uninstalling Elasticsearch Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click Elasticsearch . On the Custom Resource Definition Details page, click Instances . Click the options menu to the instance, and then click Delete Elasticsearch . Go to the Administration Custom Resource Definitions page. Click the options menu to Elasticsearch , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu to the OpenShift Elasticsearch Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. Go to the Home Projects page. Click the options menu to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 15.5. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift CLI ( oc ) is installed on your workstation. Procedure Ensure the latest version of the subscribed operator (for example, serverless-operator ) is identified in the currentCSV field. USD oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV Example output currentCSV: serverless-operator.v1.28.0 Delete the subscription (for example, serverless-operator ): USD oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless Example output subscription.operators.coreos.com "serverless-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless Example output clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted Additional resources Reclaiming a persistent volume manually
[ "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/cluster-logging-uninstall
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.13 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . Install the legacy Migration Toolkit for Containers Operator on the OpenShift Container Platform 3 source cluster from the command line interface. Configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 7.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 7.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. 
With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 7.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.13 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.13 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 7.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.13. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... 
- name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 7.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.13, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 7.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 7.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 7.4.1.2. 
Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 7.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 7.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 7.4.2.1. NetworkPolicy configuration 7.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 7.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 7.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. 
However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 7.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 7.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 7.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 7.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 7.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 7.5.1. 
Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 7.5.2. Retrieving Multicloud Object Gateway credentials Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . 7.5.3. Additional resources Procedure Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 7.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
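After running the cleanup commands in section 7.6, it can be worth confirming that nothing was left behind. The following verification sketch is not part of the official procedure; it simply searches for leftover MTC and Velero resources and prints a short confirmation when none remain.

# Confirm that no migration or Velero CRDs remain on the cluster.
oc get crds -o name | grep -E 'migration.openshift.io|velero' || echo "no leftover CRDs"

# Confirm that no related cluster roles or cluster role bindings remain.
oc get clusterroles,clusterrolebindings -o name | grep -E 'migration|velero' || echo "no leftover RBAC objects"

Run the same checks on every cluster that had MTC installed, because the uninstall steps are applied per cluster.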
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
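Section 7.4 notes that MTC on OpenShift Container Platform 4.2 to 4.13 inherits the cluster-wide proxy settings. Before overriding anything in the MigrationController CR, you can check what those inherited settings are. The commands below are a convenience sketch using the standard cluster-wide Proxy object, not part of the documented installation steps.

# Inspect the cluster-wide proxy configuration that MTC inherits by default.
oc get proxy cluster -o yaml

# Print only the effective proxy values from the object status.
oc get proxy cluster -o jsonpath='{.status}{"\n"}'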
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/installing-restricted-3-4
Chapter 20. KubeStorageVersionMigrator [operator.openshift.io/v1]
Chapter 20. KubeStorageVersionMigrator [operator.openshift.io/v1] Description KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 20.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 20.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 20.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 20.1.3. 
.status.conditions Description conditions is a list of conditions and their status Type array 20.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 20.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 20.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 20.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubestorageversionmigrators DELETE : delete collection of KubeStorageVersionMigrator GET : list objects of kind KubeStorageVersionMigrator POST : create a KubeStorageVersionMigrator /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name} DELETE : delete a KubeStorageVersionMigrator GET : read the specified KubeStorageVersionMigrator PATCH : partially update the specified KubeStorageVersionMigrator PUT : replace the specified KubeStorageVersionMigrator /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name}/status GET : read status of the specified KubeStorageVersionMigrator PATCH : partially update status of the specified KubeStorageVersionMigrator PUT : replace status of the specified KubeStorageVersionMigrator 20.2.1. /apis/operator.openshift.io/v1/kubestorageversionmigrators HTTP method DELETE Description delete collection of KubeStorageVersionMigrator Table 20.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeStorageVersionMigrator Table 20.2. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigratorList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeStorageVersionMigrator Table 20.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.4. Body parameters Parameter Type Description body KubeStorageVersionMigrator schema Table 20.5. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 201 - Created KubeStorageVersionMigrator schema 202 - Accepted KubeStorageVersionMigrator schema 401 - Unauthorized Empty 20.2.2. /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name} Table 20.6. Global path parameters Parameter Type Description name string name of the KubeStorageVersionMigrator HTTP method DELETE Description delete a KubeStorageVersionMigrator Table 20.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 20.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeStorageVersionMigrator Table 20.9. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeStorageVersionMigrator Table 20.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.11. 
HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeStorageVersionMigrator Table 20.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.13. Body parameters Parameter Type Description body KubeStorageVersionMigrator schema Table 20.14. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 201 - Created KubeStorageVersionMigrator schema 401 - Unauthorized Empty 20.2.3. /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name}/status Table 20.15. Global path parameters Parameter Type Description name string name of the KubeStorageVersionMigrator HTTP method GET Description read status of the specified KubeStorageVersionMigrator Table 20.16. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeStorageVersionMigrator Table 20.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.18. 
HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeStorageVersionMigrator Table 20.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.20. Body parameters Parameter Type Description body KubeStorageVersionMigrator schema Table 20.21. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 201 - Created KubeStorageVersionMigrator schema 401 - Unauthorized Empty
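As a practical illustration of the spec fields described above, the following sketch reads and patches the operator resource from the command line. It assumes the instance is the usual singleton named cluster; list the instances first and verify the name before patching anything.

# List the KubeStorageVersionMigrator instances; there is normally a single one.
oc get kubestorageversionmigrator

# Raise the operand log level to one of the documented values (Normal, Debug, Trace, TraceAll).
oc patch kubestorageversionmigrator cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'

Because unsupportedConfigOverrides is not supported by Red Hat and blocks cluster upgrades, prefer changing documented fields such as logLevel and managementState.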
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/kubestorageversionmigrator-operator-openshift-io-v1
Chapter 1. Installing the Operating System
Chapter 1. Installing the Operating System Before setting up for specific development needs, the underlying system must be set up. Install Red Hat Enterprise Linux in the Workstation variant. Follow the instructions in the Red Hat Enterprise Linux Installation Guide . While installing, pay attention to software selection . Select the Development and Creative Workstation system profile and enable the installation of Add-ons appropriate for your development needs. The relevant Add-ons are listed in each of the following sections focusing on various types of development. To develop applications that cooperate closely with the Linux kernel such as drivers, enable automatic crash dumping with kdump during the installation. After the system itself is installed, register it and attach the required subscriptions. Follow the instructions in Red Hat Enterprise Linux System Administrator's Guide, Chapter 7., Registering the System and Managing Subscriptions . The following sections list the particular subscriptions that must be attached for the respective type of development. More recent versions of development tools and utilities are available as Red Hat Software Collections. For instructions on accessing Red Hat Software Collections, see Red Hat Software Collections Release Notes, Chapter 2., Installation . Additional Resources Red Hat Enterprise Linux Installation Guide - Subscription Manager Red Hat Subscription Management Red Hat Enterprise Linux 7 Package Manifest
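The registration step mentioned above can be performed from a terminal once installation finishes. The commands below are a minimal sketch of that flow; the username is a placeholder, and the --auto option attaches the best matching subscription, so substitute an explicit pool ID if your account requires a specific one.

# Register the freshly installed system with your Red Hat account.
subscription-manager register --username <your_username>

# Attach the best available subscription automatically.
subscription-manager attach --auto

# Confirm which subscriptions are attached.
subscription-manager list --consumed

The sections that follow name the particular subscriptions and Add-ons needed for each type of development, so verify them against the output of the last command.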
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/setting-up_installing-system
Chapter 2. ConsoleCLIDownload [console.openshift.io/v1]
Chapter 2. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleCLIDownloadSpec is the desired cli download configuration. 2.1.1. .spec Description ConsoleCLIDownloadSpec is the desired cli download configuration. Type object Required description displayName links Property Type Description description string description is the description of the CLI download (can include markdown). displayName string displayName is the display name of the CLI download. links array links is a list of objects that provide CLI download link details. links[] object 2.1.2. .spec.links Description links is a list of objects that provide CLI download link details. Type array 2.1.3. .spec.links[] Description Type object Required href Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 2.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleclidownloads DELETE : delete collection of ConsoleCLIDownload GET : list objects of kind ConsoleCLIDownload POST : create a ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name} DELETE : delete a ConsoleCLIDownload GET : read the specified ConsoleCLIDownload PATCH : partially update the specified ConsoleCLIDownload PUT : replace the specified ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name}/status GET : read status of the specified ConsoleCLIDownload PATCH : partially update status of the specified ConsoleCLIDownload PUT : replace status of the specified ConsoleCLIDownload 2.2.1. /apis/console.openshift.io/v1/consoleclidownloads Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleCLIDownload Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleCLIDownload Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownloadList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleCLIDownload Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 202 - Accepted ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.2. /apis/console.openshift.io/v1/consoleclidownloads/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleCLIDownload Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleCLIDownload Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleCLIDownload Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleCLIDownload Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.3. /apis/console.openshift.io/v1/consoleclidownloads/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ConsoleCLIDownload Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleCLIDownload Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleCLIDownload Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.31. HTTP responses HTTP code Response body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty
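The endpoints documented above can also be exercised from the command line with the oc client. The following is a minimal sketch rather than part of this reference: the resource name my-cli-download is a placeholder, and the spec.displayName field used in the patch is assumed from the ConsoleCLIDownload schema.

# List all ConsoleCLIDownload resources (the resource is cluster-scoped, so no namespace is needed)
oc get consoleclidownloads.console.openshift.io

# Read a single resource as YAML; "my-cli-download" is a hypothetical name
oc get consoleclidownload my-cli-download -o yaml

# Partially update (PATCH) the resource with a JSON merge patch
oc patch consoleclidownload my-cli-download --type merge -p '{"spec":{"displayName":"Example CLI"}}'

# Delete the resource
oc delete consoleclidownload my-cli-download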
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/console_apis/consoleclidownload-console-openshift-io-v1
5.18. Additional Resources
5.18. Additional Resources The following sources of information provide additional resources regarding firewalld . 5.18.1. Installed Documentation firewalld(1) man page - Describes command options for firewalld . firewalld.conf(5) man page - Contains information to configure firewalld . firewall-cmd(1) man page - Describes command options for the firewalld command-line client. firewall-config(1) man page - Describes settings for the firewall-config tool. firewall-offline-cmd(1) man page - Describes command options for the firewalld offline command-line client. firewalld.icmptype(5) man page - Describes XML configuration files for ICMP filtering. firewalld.ipset(5) man page - Describes XML configuration files for the firewalld IP sets. firewalld.service(5) man page - Describes XML configuration files for firewalld service . firewalld.zone(5) man page - Describes XML configuration files for firewalld zone configuration. firewalld.direct(5) man page - Describes the firewalld direct interface configuration file. firewalld.lockdown-whitelist(5) man page - Describes the firewalld lockdown whitelist configuration file. firewalld.richlanguage(5) man page - Describes the firewalld rich language rule syntax. firewalld.zones(5) man page - General description of what zones are and how to configure them. firewalld.dbus(5) man page - Describes the D-Bus interface of firewalld . 5.18.2. Online Documentation http://www.firewalld.org/ - firewalld home page.
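For example, to open two of the manual pages listed above from a shell (a quick illustration, not an additional resource itself):

# Rich language rule syntax reference
man 5 firewalld.richlanguage

# Command options for the firewalld command-line client
man 1 firewall-cmd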
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-firewalld-additional_resources
Appendix A. Upgrading From Red Hat Enterprise Linux 6 High Availability Add-On
Appendix A. Upgrading From Red Hat Enterprise Linux 6 High Availability Add-On This appendix provides an overview of upgrading Red Hat Enterprise Linux High Availability Add-On from release 6 to release 7. A.1. Overview of Differences Between Releases The Red Hat Enterprise Linux 7 High Availability Add-On introduces a new suite of technologies that underlie high-availability systems. These technologies are based on Pacemaker and Corosync and they replace the CMAN and RGManager technologies from previous releases of the High Availability Add-On. Below are some of the differences between the two releases. For a more comprehensive look at the differences between releases, see the appendix titled "Cluster Creation with rgmanager and with Pacemaker" from the Red Hat Enterprise Linux High Availability Add-On Reference . Configuration Files - Previously, cluster configuration was found in the /etc/cluster/cluster.conf file, while cluster configuration in release 7 is in /etc/corosync/corosync.conf for membership and quorum configuration and /var/lib/pacemaker/cib/cib.xml for cluster node and resource configuration. Executable Files - Previously, cluster commands were issued with ccs at the command line or with luci for graphical configuration. In Red Hat Enterprise Linux 7 High Availability Add-On, configuration is done with pcs at the command line or with the pcsd Web UI. Starting the Service - Previously, all services, including those in the High Availability Add-On, were managed with the service command to start services and the chkconfig command to configure services to start upon system boot. This had to be configured separately for all cluster services ( rgmanager , cman , and ricci ). For example: For Red Hat Enterprise Linux 7 High Availability Add-On, the systemctl command controls both manual startup and automated boot-time startup, and all cluster services are grouped in the pcsd.service . For example: User Access - Previously, the root user or a user with proper permissions could access the luci configuration interface. All access required the ricci password for the node. In Red Hat Enterprise Linux 7 High Availability Add-On, the pcsd Web UI requires that you authenticate as user hacluster , which is the common system user. The root user can set the password for hacluster . Creating Clusters, Nodes and Resources - Previously, nodes were created with the ccs command-line tool or with the luci graphical interface. Creating a cluster and adding nodes to it were separate processes. For example, to create a cluster and add a node by means of the command line, perform the following: In Red Hat Enterprise Linux 7 High Availability Add-On, clusters, nodes, and resources are added with pcs at the command line or with the pcsd Web UI. For example, to create a cluster by means of the command line, perform the following: Cluster removal - Previously, administrators removed a cluster by deleting nodes manually from the luci interface or by deleting the cluster.conf file from each node. In Red Hat Enterprise Linux 7 High Availability Add-On, administrators can remove a cluster by issuing the pcs cluster destroy command.
[ "service rgmanager start chkconfig rgmanager on", "systemctl start pcsd.service systemctl enable pcsd.service pcs cluster start -all", "ccs -h node1.example.com --createcluster examplecluster ccs -h node1.example.com --addnode node2.example.com", "pcs cluster setup examplecluster node1 node2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/ap-upgrade-HAOO
Chapter 1. Architecture of OpenShift AI Self-Managed
Chapter 1. Architecture of OpenShift AI Self-Managed Red Hat OpenShift AI Self-Managed is an Operator that is available in a self-managed environment, such as Red Hat OpenShift Container Platform, or in Red Hat-managed cloud environments such as Red Hat OpenShift Dedicated (with a Customer Cloud Subscription for AWS or GCP), Red Hat OpenShift Service on Amazon Web Services (ROSA Classic or ROSA HCP), or Microsoft Azure Red Hat OpenShift. OpenShift AI integrates the following components and services: At the service layer: OpenShift AI dashboard A customer-facing dashboard that shows available and installed applications for the OpenShift AI environment as well as learning resources such as tutorials, quick starts, and documentation. Administrative users can access functionality to manage users, clusters, notebook images, accelerator profiles, and model-serving runtimes. Data scientists can use the dashboard to create projects to organize their data science work. Model serving Data scientists can deploy trained machine-learning models to serve intelligent applications in production. After deployment, applications can send requests to the model using its deployed API endpoint. Data science pipelines Data scientists can build portable machine learning (ML) workflows with data science pipelines 2.0, using Docker containers. With data science pipelines, data scientists can automate workflows as they develop their data science models. Jupyter (self-managed) A self-managed application that allows data scientists to configure their own notebook server environment and develop machine learning models in JupyterLab. Distributed workloads Data scientists can use multiple nodes in parallel to train machine-learning models or process data more quickly. This approach significantly reduces the task completion time, and enables the use of larger datasets and more complex models. At the management layer: The Red Hat OpenShift AI Operator A meta-operator that deploys and maintains all components and sub-operators that are part of OpenShift AI. Monitoring services Prometheus gathers metrics from OpenShift AI for monitoring purposes. When you install the Red Hat OpenShift AI Operator in the OpenShift cluster, the following new projects are created: The redhat-ods-operator project contains the Red Hat OpenShift AI Operator. The redhat-ods-applications project installs the dashboard and other required components of OpenShift AI. The redhat-ods-monitoring project contains services for monitoring. The rhods-notebooks project is where notebook environments are deployed by default. You or your data scientists must create additional projects for the applications that will use your machine learning models. Do not install independent software vendor (ISV) applications in namespaces associated with OpenShift AI.
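As a quick way to confirm that the projects listed above were created after installing the Operator, a cluster administrator could run checks along the following lines; this is an illustrative verification, not part of the installation procedure.

# List the namespaces created for OpenShift AI
oc get namespaces | grep -E 'redhat-ods|rhods-notebooks'

# Inspect the Operator pod and the dashboard and other application pods
oc get pods -n redhat-ods-operator
oc get pods -n redhat-ods-applications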
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/architecture-of-openshift-ai-self-managed_install
9.11. Viewing Certificates and CRLs Published to File
9.11. Viewing Certificates and CRLs Published to File Certificates and CRLs can be published to two types of files: base-64 encoded or DER-encoded. The content of these files can be viewed by converting the files to pretty-print format using the dumpasn1 tool or the PrettyPrintCert or PrettyPrintCrl tool. To view the content in a base-64 encoded file: Convert the base-64 file to binary. For example: Use the PrettyPrintCert or PrettyPrintCrl tool to convert the binary file to pretty-print format. For example: To view the content of a DER-encoded file, simply run the dumpasn1 , PrettyPrintCert , or PrettyPrintCrl tool with the DER-encoded file. For example:
[ "AtoB /tmp/example.b64 /tmp/example.bin", "PrettyPrintCert example.bin example.cert", "PrettyPrintCrl example.der example.crl" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Viewing_Certificates_and_CRLs_Published_to_File
Chapter 1. Working with kernel modules
Chapter 1. Working with kernel modules This Chapter explains: What is a kernel module. How to use the kmod utilities to manage modules and their dependencies. How to configure module parameters to control behavior of the kernel modules. How to load modules at boot time. Note In order to use the kernel module utilities described in this chapter, first ensure the kmod package is installed on your system by running, as root: 1.1. What is a kernel module? The Linux kernel is monolithic by design. However, it is compiled with optional or additional modules as required by each use case. This means that you can extend the kernel's capabilities through the use of dynamically-loaded kernel modules . A kernel module can provide: A device driver which adds support for new hardware. Support for a file system such as GFS2 or NFS . Like the kernel itself, modules can take parameters that customize their behavior. Though the default parameters work well in most cases. In relation to kernel modules, user-space tools can do the following operations: Listing modules currently loaded into a running kernel. Querying all available modules for available parameters and module-specific information. Loading or unloading (removing) modules dynamically into or from a running kernel. Many of these utilities, which are provided by the kmod package, take module dependencies into account when performing operations. As a result, manual dependency-tracking is rarely necessary. On modern systems, kernel modules are automatically loaded by various mechanisms when needed. However, there are occasions when it is necessary to load or unload modules manually. For example, when one module is preferred over another although either is able to provide basic functionality, or when a module performs unexpectedly. 1.2. Kernel module dependencies Certain kernel modules sometimes depend on one or more other kernel modules. The /lib/modules/<KERNEL_VERSION>/modules.dep file contains a complete list of kernel module dependencies for the respective kernel version. The dependency file is generated by the depmod program, which is a part of the kmod package. Many of the utilities provided by kmod take module dependencies into account when performing operations so that manual dependency-tracking is rarely necessary. Warning The code of kernel modules is executed in kernel-space in the unrestricted mode. Because of this, you should be mindful of what modules you are loading. Additional resources For more information about /lib/modules/<KERNEL_VERSION>/modules.dep , refer to the modules.dep(5) manual page. For further details including the synopsis and options of depmod , see the depmod(8) manual page. 1.3. Listing currently-loaded modules You can list all kernel modules that are currently loaded into the kernel by running the lsmod command, for example: The lsmod output specifies three columns: Module The name of a kernel module currently loaded in memory. Size The amount of memory the kernel module uses in kilobytes. Used by A decimal number representing how many dependencies there are on the Module field. A comma separated string of dependent Module names. Using this list, you can first unload all the modules depending on the module you want to unload. Finally, note that lsmod output is less verbose and considerably easier to read than the content of the /proc/modules pseudo-file. 1.4. Displaying information about a module You can display detailed information about a kernel module using the modinfo <MODULE_NAME> command. 
Note When entering the name of a kernel module as an argument to one of the kmod utilities, do not append a .ko extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Example 1.1. Listing information about a kernel module with modinfo To display information about the e1000e module, which is the Intel PRO/1000 network driver, enter the following command as root : # modinfo e1000e filename: /lib/modules/3.10.0-121.el7.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko version: 2.3.2-k license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation, 1.5. Loading kernel modules at system runtime The optimal way to expand the functionality of the Linux kernel is by loading kernel modules. The following procedure describes how to use the modprobe command to find and load a kernel module into the currently running kernel. Prerequisites Root permissions. The kmod package is installed. The respective kernel module is not loaded. To ensure this is the case, see Listing Currently Loaded Modules . Procedure Select a kernel module you want to load. The modules are located in the /lib/modules/USD(uname -r)/kernel/<SUBSYSTEM>/ directory. Load the relevant kernel module: Note When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Optionally, verify the relevant module was loaded: If the module was loaded correctly, this command displays the relevant kernel module. For example: Important The changes described in this procedure will not persist after rebooting the system. For information on how to load kernel modules to persist across system reboots, see Loading kernel modules automatically at system boot time . Additional resources For further details about modprobe , see the modprobe(8) manual page. 1.6. Unloading kernel modules at system runtime At times, you find that you need to unload certain kernel modules from the running kernel. The following procedure describes how to use the modprobe command to find and unload a kernel module at system runtime from the currently loaded kernel. Prerequisites Root permissions. The kmod package is installed. Procedure Execute the lsmod command and select a kernel module you want to unload. If a kernel module has dependencies, unload those prior to unloading the kernel module. For details on identifying modules with dependencies, see Listing Currently Loaded Modules and Kernel module dependencies . Unload the relevant kernel module: When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Warning Do not unload kernel modules when they are used by the running system. Doing so can lead to an unstable or non-operational system. Optionally, verify the relevant module was unloaded: If the module was unloaded successfully, this command does not display any output. Important After finishing this procedure, the kernel modules that are defined to be automatically loaded on boot will not stay unloaded after rebooting the system. For information on how to counter this outcome, see Preventing kernel modules from being automatically loaded at system boot time . Additional resources For further details about modprobe , see the modprobe(8) manual page. 1.7.
Loading kernel modules automatically at system boot time The following procedure describes how to configure a kernel module so that it is loaded automatically during the boot process. Prerequisites Root permissions. The kmod package is installed. Procedure Select a kernel module you want to load during the boot process. The modules are located in the /lib/modules/USD(uname -r)/kernel/<SUBSYSTEM>/ directory. Create a configuration file for the module: Note When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Optionally, after reboot, verify the relevant module was loaded: The example command above should succeed and display the relevant kernel module. Important The changes described in this procedure will persist after rebooting the system. Additional resources For further details about loading kernel modules during the boot process, see the modules-load.d(5) manual page. 1.8. Preventing kernel modules from being automatically loaded at system boot time The following procedure describes how to add a kernel module to a denylist so that it will not be automatically loaded during the boot process. Prerequisites Root permissions. The kmod package is installed. Ensure that a kernel module in a denylist is not vital for your current system configuration. Procedure Select a kernel module that you want to put in a denylist: The lsmod command displays a list of modules loaded to the currently running kernel. Alternatively, identify an unloaded kernel module you want to prevent from potentially loading. All kernel modules are located in the /lib/modules/<KERNEL_VERSION>/kernel/<SUBSYSTEM>/ directory. Create a configuration file for a denylist: The example shows the contents of the blacklist.conf file, edited by the vim editor. The blacklist line ensures that the relevant kernel module will not be automatically loaded during the boot process. The blacklist command, however, does not prevent the module from being loaded as a dependency for another kernel module that is not in a denylist. Therefore the install line causes the /bin/false to run instead of installing a module. The lines starting with a hash sign are comments to make the file more readable. Note When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Create a backup copy of the current initial ramdisk image before rebuilding: The command above creates a backup initramfs image in case the new version has an unexpected problem. Alternatively, create a backup copy of other initial ramdisk image which corresponds to the kernel version for which you want to put kernel modules in a denylist: Generate a new initial ramdisk image to reflect the changes: If you are building an initial ramdisk image for a different kernel version than you are currently booted into, specify both target initramfs and kernel version: Reboot the system: Important The changes described in this procedure will take effect and persist after rebooting the system. If you improperly put a key kernel module in a denylist, you can face an unstable or non-operational system. Additional resources For further details concerning the dracut utility, refer to the dracut(8) manual page. 
For more information on preventing automatic loading of kernel modules at system boot time on Red Hat Enterprise Linux 8 and earlier versions, see How do I prevent a kernel module from loading automatically? 1.9. Signing kernel modules for secure boot Red Hat Enterprise Linux 7 includes support for the UEFI Secure Boot feature, which means that Red Hat Enterprise Linux 7 can be installed and run on systems where UEFI Secure Boot is enabled. Note that Red Hat Enterprise Linux 7 does not require the use of Secure Boot on UEFI systems. If Secure Boot is enabled, the UEFI operating system boot loaders, the Red Hat Enterprise Linux kernel, and all kernel modules must be signed with a private key and authenticated with the corresponding public key. If they are not signed and authenticated, the system will not be allowed to finish the booting process. The Red Hat Enterprise Linux 7 distribution includes: Signed boot loaders Signed kernels Signed kernel modules In addition, the signed first-stage boot loader and the signed kernel include embedded Red Hat public keys. These signed executable binaries and embedded keys enable Red Hat Enterprise Linux 7 to install, boot, and run with the Microsoft UEFI Secure Boot Certification Authority keys that are provided by the UEFI firmware on systems that support UEFI Secure Boot. Note Not all UEFI-based systems include support for Secure Boot. The information provided in the following sections describes the steps to self-sign privately built kernel modules for use with Red Hat Enterprise Linux 7 on UEFI-based build systems where Secure Boot is enabled. These sections also provide an overview of available options for importing your public key into a target system where you want to deploy your kernel modules. To sign and load kernel modules, you need to: Have the relevant utilities installed on your system . Authenticate a kernel module . Generate a public and private key pair . Import the public key on the target system . Sign the kernel module with the private key . Load the signed kernel module . 1.9.1. Prerequisites To be able to sign externally built kernel modules, install the utilities listed in the following table on the build system. Table 1.1. Required utilities Utility Provided by package Used on Purpose openssl openssl Build system Generates public and private X.509 key pair sign-file kernel-devel Build system Perl script used to sign kernel modules perl perl Build system Perl interpreter used to run the signing script mokutil mokutil Target system Optional utility used to manually enroll the public key keyctl keyutils Target system Optional utility used to display public keys in the system key ring Note The build system, where you build and sign your kernel module, does not need to have UEFI Secure Boot enabled and does not even need to be a UEFI-based system. 1.9.2. Kernel module authentication In Red Hat Enterprise Linux 7, when a kernel module is loaded, the module's signature is checked using the public X.509 keys on the kernel's system key ring, excluding keys on the kernel's system black-list key ring. The following sections provide an overview of sources of keys/keyrings, examples of loaded keys from different sources in the system. Also, the user can see what it takes to authenticate a kernel module. 1.9.2.1. Sources for public keys used to authenticate kernel modules During boot, the kernel loads X.509 keys into the system key ring or the system black-list key ring from a set of persistent key stores as shown in the table below. 
Table 1.2. Sources for system key rings Source of X.509 keys User ability to add keys UEFI Secure Boot state Keys loaded during boot Embedded in kernel No - .system_keyring UEFI Secure Boot "db" Limited Not enabled No Enabled .system_keyring UEFI Secure Boot "dbx" Limited Not enabled No Enabled .system_keyring Embedded in shim.efi boot loader No Not enabled No Enabled .system_keyring Machine Owner Key (MOK) list Yes Not enabled No Enabled .system_keyring If the system is not UEFI-based or if UEFI Secure Boot is not enabled, then only the keys that are embedded in the kernel are loaded onto the system key ring. In that case you have no ability to augment that set of keys without rebuilding the kernel. The system black list key ring is a list of X.509 keys which have been revoked. If your module is signed by a key on the black list then it will fail authentication even if your public key is in the system key ring. You can display information about the keys on the system key rings using the keyctl utility. The following is a shortened example output from a Red Hat Enterprise Linux 7 system where UEFI Secure Boot is not enabled. The following is a shortened example output from a Red Hat Enterprise Linux 7 system where UEFI Secure Boot is enabled. The above output shows the addition of two keys from the UEFI Secure Boot "db" keys as well as the Red Hat Secure Boot (CA key 1) , which is embedded in the shim.efi boot loader. You can also look for the kernel console messages that identify the keys with an UEFI Secure Boot related source. These include UEFI Secure Boot db, embedded shim, and MOK list. 1.9.2.2. Kernel module authentication requirements This section explains what conditions have to be met for loading kernel modules on systems with enabled UEFI Secure Boot functionality. If UEFI Secure Boot is enabled or if the module.sig_enforce kernel parameter has been specified, you can only load signed kernel modules that are authenticated using a key on the system key ring. In addition, the public key must not be on the system black list key ring. If UEFI Secure Boot is disabled and if the module.sig_enforce kernel parameter has not been specified, you can load unsigned kernel modules and signed kernel modules without a public key. This is summarized in the table below. Table 1.3. Kernel module authentication requirements for loading Module signed Public key found and signature valid UEFI Secure Boot state sig_enforce Module load Kernel tainted Unsigned - Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed No Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed Yes Not enabled Not enabled Succeeds No Not enabled Enabled Succeeds No Enabled - Succeeds No 1.9.3. Generating a public and private X.509 key pair You need to generate a public and private X.509 key pair to succeed in your efforts of using kernel modules on a Secure Boot-enabled system. You will later use the private key to sign the kernel module. You will also have to add the corresponding public key to the Machine Owner Key (MOK) for Secure Boot to validate the signed module. For instructions to do so, see Section 1.9.4.2, "System administrator manually adding public key to the MOK list" . Some of the parameters for this key pair generation are best specified with a configuration file. 
Create a configuration file with parameters for the key pair generation: Create an X.509 public and private key pair as shown in the following example: The public key will be written to the my_signing_key_pub .der file and the private key will be written to the my_signing_key .priv file. Enroll your public key on all systems where you want to authenticate and load your kernel module. For details, see Section 1.9.4, "Enrolling public key on target system" . Warning Apply strong security measures and access policies to guard the contents of your private key. In the wrong hands, the key could be used to compromise any system which is authenticated by the corresponding public key. 1.9.4. Enrolling public key on target system When Red Hat Enterprise Linux 7 boots on a UEFI-based system with Secure Boot enabled, the kernel loads onto the system key ring all public keys that are in the Secure Boot db key database, but not in the dbx database of revoked keys. The sections below describe different ways of importing a public key on a target system so that the system key ring is able to use the public key to authenticate a kernel module. 1.9.4.1. Factory firmware image including public key To facilitate authentication of your kernel module on your systems, consider requesting your system vendor to incorporate your public key into the UEFI Secure Boot key database in their factory firmware image. 1.9.4.2. System administrator manually adding public key to the MOK list The Machine Owner Key (MOK) facility feature can be used to expand the UEFI Secure Boot key database. When Red Hat Enterprise Linux 7 boots on a UEFI-enabled system with Secure Boot enabled, the keys on the MOK list are also added to the system key ring in addition to the keys from the key database. The MOK list keys are also stored persistently and securely in the same fashion as the Secure Boot database keys, but these are two separate facilities. The MOK facility is supported by shim.efi , MokManager.efi , grubx64.efi , and the Red Hat Enterprise Linux 7 mokutil utility. Enrolling a MOK key requires manual interaction by a user at the UEFI system console on each target system. Nevertheless, the MOK facility provides a convenient method for testing newly generated key pairs and testing kernel modules signed with them. To add your public key to the MOK list: Request the addition of your public key to the MOK list: You will be asked to enter and confirm a password for this MOK enrollment request. Reboot the machine. The pending MOK key enrollment request will be noticed by shim.efi and it will launch MokManager.efi to allow you to complete the enrollment from the UEFI console. Enter the password you previously associated with this request and confirm the enrollment. Your public key is added to the MOK list, which is persistent. Once a key is on the MOK list, it will be automatically propagated to the system key ring on this and subsequent boots when UEFI Secure Boot is enabled. 1.9.5. Signing kernel module with the private key Assuming you have your kernel module ready: Use a Perl script to sign your kernel module with your private key: Note The Perl script requires that you provide both the files that contain your private and the public key as well as the kernel module file that you want to sign. Your kernel module is in ELF image format and the Perl script computes and appends the signature directly to the ELF image in your kernel module file. 
The modinfo utility can be used to display information about the kernel module's signature, if it is present. For information on using modinfo , see Section 1.4, "Displaying information about a module" . The appended signature is not contained in an ELF image section and is not a formal part of the ELF image. Therefore, utilities such as readelf will not be able to display the signature on your kernel module. Your kernel module is now ready for loading. Note that your signed kernel module is also loadable on systems where UEFI Secure Boot is disabled or on a non-UEFI system. That means you do not need to provide both a signed and unsigned version of your kernel module. 1.9.6. Loading signed kernel module Use mokutil to enroll your public key on the MOK list so that it is added to the system key ring, and then manually load your kernel module with the modprobe command. Optionally, verify that your kernel module will not load before you have enrolled your public key. For details on how to list currently loaded kernel modules, see Section 1.3, "Listing currently-loaded modules" . Verify what keys have been added to the system key ring on the current boot: Since your public key has not been enrolled yet, it should not be displayed in the output of the command. Request enrollment of your public key: Reboot, and complete the enrollment at the UEFI console: Verify the keys on the system key ring again: Copy the module into the /extra/ directory of the kernel you want: Update the modular dependency list: Load the kernel module and verify that it was successfully loaded: Optionally, to load the module on boot, add it to the /etc/modules-load.d/my_module.conf file:
[ "# yum install kmod", "# lsmod Module Size Used by tcp_lp 12663 0 bnep 19704 2 bluetooth 372662 7 bnep rfkill 26536 3 bluetooth fuse 87661 3 ebtable_broute 12731 0 bridge 110196 1 ebtable_broute stp 12976 1 bridge llc 14552 2 stp,bridge ebtable_filter 12827 0 ebtables 30913 3 ebtable_broute,ebtable_nat,ebtable_filter ip6table_nat 13015 1 nf_nat_ipv6 13279 1 ip6table_nat iptable_nat 13011 1 nf_conntrack_ipv4 14862 4 nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 nf_nat_ipv4 13263 1 iptable_nat nf_nat 21798 4 nf_nat_ipv4,nf_nat_ipv6,ip6table_nat,iptable_nat [output truncated]", "modinfo e1000e filename: /lib/modules/3.10.0-121.el7.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko version: 2.3.2-k license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation,", "modprobe < MODULE_NAME >", "lsmod | grep < MODULE_NAME >", "lsmod | grep serio_raw serio_raw 16384 0", "modprobe -r < MODULE_NAME >", "lsmod | grep < MODULE_NAME >", "echo < MODULE_NAME > > /etc/modules-load.d/< MODULE_NAME >.conf", "lsmod | grep < MODULE_NAME >", "lsmod Module Size Used by fuse 126976 3 xt_CHECKSUM 16384 1 ipt_MASQUERADE 16384 1 uinput 20480 1 xt_conntrack 16384 1 ...", "vim /etc/modprobe.d/blacklist.conf # Blacklists < KERNEL_MODULE_1 > blacklist < MODULE_NAME_1 > install < MODULE_NAME_1 > /bin/false # Blacklists < KERNEL_MODULE_2 > blacklist < MODULE_NAME_2 > install < MODULE_NAME_2 > /bin/false # Blacklists < KERNEL_MODULE_n > blacklist < MODULE_NAME_n > install < MODULE_NAME_n > /bin/false ...", "cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).bak.USD(date +%m-%d-%H%M%S).img", "cp /boot/initramfs-< SOME_VERSION >.img /boot/initramfs-< SOME_VERSION >.img.bak.USD(date +%m-%d-%H%M%S)", "dracut -f -v", "dracut -f -v /boot/initramfs-< TARGET_VERSION >.img < CORRESPONDING_TARGET_KERNEL_VERSION >", "reboot", "keyctl list %:.system_keyring 3 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7", "keyctl list %:.system_keyring 6 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Secure Boot (CA key 1): 4016841644ce3a810408050766e8f8a29 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed ...asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7", "dmesg | grep 'EFI: Loaded cert' [5.160660] EFI: Loaded cert 'Microsoft Windows Production PCA 2011: a9290239 [5.160674] EFI: Loaded cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309b [5.165794] EFI: Loaded cert 'Red Hat Secure Boot (CA key 1): 4016841644ce3a8", "cat << EOF > configuration_file.config [ req ] default_bits = 4096 distinguished_name = req_distinguished_name prompt = no string_mask = utf8only x509_extensions = myexts [ req_distinguished_name ] O = Organization CN = Organization signing key emailAddress = E-mail address [ myexts ] basicConstraints=critical,CA:FALSE keyUsage=digitalSignature subjectKeyIdentifier=hash authorityKeyIdentifier=keyid EOF", "openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv", "mokutil --import 
my_signing_key_pub.der", "perl /usr/src/kernels/USD(uname -r)/scripts/sign-file sha256 my_signing_key.priv my_signing_key_pub.der my_module.ko", "keyctl list %:.system_keyring", "mokutil --import my_signing_key_pub.der", "reboot", "keyctl list %:.system_keyring", "cp my_module.ko /lib/modules/USD(uname -r)/extra/", "depmod -a", "modprobe -v my_module lsmod | grep my_module", "echo \"my_module\" > /etc/modules-load.d/my_module.conf" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/kernel_administration_guide/chap-Documentation-Kernel_Administration_Guide-Working_with_kernel_modules
10.4. Setting Cache Cull Limits
10.4. Setting Cache Cull Limits The cachefilesd daemon works by caching remote data from shared file systems to free space on the disk. This could potentially consume all available free space, which is undesirable if the disk also houses the root partition. To control this, cachefilesd tries to maintain a certain amount of free space by discarding old objects (that is, objects accessed less recently) from the cache. This behavior is known as cache culling . Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the underlying file system. There are six limits controlled by settings in /etc/cachefilesd.conf : brun N % (percentage of blocks) , frun N % (percentage of files) If the amount of free space and the number of available files in the cache rises above both these limits, then culling is turned off. bcull N % (percentage of blocks), fcull N % (percentage of files) If the amount of available space or the number of files in the cache falls below either of these limits, then culling is started. bstop N % (percentage of blocks), fstop N % (percentage of files) If the amount of available space or the number of available files in the cache falls below either of these limits, then no further allocation of disk space or files is permitted until culling has raised the available space and files above these limits again. The default value of N for each setting is as follows: brun / frun - 10% bcull / fcull - 7% bstop / fstop - 3% When configuring these settings, the following must hold true: 0 <= bstop < bcull < brun < 100 0 <= fstop < fcull < frun < 100 These are the percentages of available space and available files and do not appear as 100 minus the percentage displayed by the df program. Important Culling depends on both b xxx and f xxx pairs simultaneously; they cannot be treated separately.
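In practice, these limits are plain lines in /etc/cachefilesd.conf . A minimal sketch that simply restates the defaults described above follows; the dir and tag lines are illustrative and depend on the local cache layout.

# /etc/cachefilesd.conf -- cache location and culling thresholds (defaults shown)
dir /var/cache/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
frun 10%
fcull 7%
fstop 3%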
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/fscacheculllimit
Chapter 1. Support policy
Chapter 1. Support policy Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of its life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.392/openjdk8-support-policy
Chapter 83. usage
Chapter 83. usage This chapter describes the commands under the usage command. 83.1. usage list List resource usage per project Usage: Table 83.1. Command arguments Value Summary -h, --help Show this help message and exit --start <start> Usage range start date, ex 2012-01-20 (default: 4 weeks ago) --end <end> Usage range end date, ex 2012-01-20 (default: tomorrow) Table 83.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 83.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 83.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 83.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 83.2. usage show Show resource usage for a single project Usage: Table 83.6. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Name or id of project to show usage for --start <start> Usage range start date, ex 2012-01-20 (default: 4 weeks ago) --end <end> Usage range end date, ex 2012-01-20 (default: tomorrow) Table 83.7. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 83.8. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 83.9. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 83.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack usage list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--start <start>] [--end <end>]", "openstack usage show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--start <start>] [--end <end>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/usage
4.288. seabios
4.288. seabios 4.288.1. RHBA-2011:1680 - seabios bug fix update An updated seabios package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The seabios package contains a legacy BIOS implementation which can be used as a coreboot payload. Bug Fixes BZ# 727328 Previously, the smp_mtrr array was not large enough to hold all 31 entries of model-specific registers (MSRs) with current qemu-kvm implementations. As a consequence, installation of a Windows Server 2008 32-bit guest failed when more than one virtual CPU was allocated in it. With this update, the size of the smp_mtrr array has been increased to 32 and now Windows Server 2008 guests install successfully in the described scenario. BZ# 733028 On reboot, reinitialization of the USB HID (Human Interface Device) devices was not done before seabios was setting up timers. Consequently, when the "shutdown -r now" command was executed in a guest, the guest became unresponsive, could not be rebooted, and the "usb-kbd: warning: key event queue full" error message was returned. A patch has been provided to address this issue and the guest now reboots properly in the described scenario. BZ# 630975 Previously, seabios only supported address space up to 40 bits per one address. As a consequence, guests with 1 TB of RAM could not boot. A patch has been provided to address this issue, which raises the memory space limit up to 48 bits, thus supporting up to 281 TB of virtual memory in a guest. BZ# 736522 Previously, the S3/S4 power state capability was advertised in the DSDT (Differentiated System Description Table) tables. This could have caused various power management issues. With this update, the S3/S4 capability has been removed from the DSDT tables, thus fixing this bug. BZ# 750191 Previously, Windows guests failed to generate memory dumps on NMIs (Non-Maskable Interrupts), even if they were properly configured to. With this update, an NMI descriptor has been added to seabios, and Windows guests now generate memory dumps on NMIs correctly. All users of seabios are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/seabios
Architecture Guide
Architecture Guide Red Hat Ceph Storage 5 Guide on Red Hat Ceph Storage Architecture Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/architecture_guide/index
Chapter 8. Entry attribute reference
Chapter 8. Entry attribute reference The attributes listed in this reference are manually assigned or available to directory entries. The attributes are listed in alphabetical order with their definition, syntax, and OID. 8.1. abstract The abstract attribute contains an abstract for a document entry. OID 0.9.2342.19200300.102.1.9 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.2. accessTo This attribute defines what specific hosts or servers a user is allowed to access. OID 5.3.6.1.1.1.1.1 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in nss_ldap/pam_ldap 8.3. accountInactivityLimit The accountInactivityLimit attribute sets the time period, in seconds, from the last login time of an account before that account is locked for inactivity. OID 1.3.6.1.4.1.11.1.3.2.1.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 8.4. acctPolicySubentry The acctPolicySubentry attribute identifies any entry which belongs to an account policy (specifically, an account lockout policy). The value of this attribute points to the account policy which is applied to the entry. This can be set on an individual user entry or on a CoS template entry or role entry. OID 1.3.6.1.4.1.11.1.3.2.1.2 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 8.5. administratorContactInfo This attribute contains the contact information for the LDAP or server administrator. OID 2.16.840.1.113730.3.1.74 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.6. adminRole This attribute contains the role assigned to the user identified in the entry. OID 2.16.840.1.113730.3.1.601 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape Administration Services 8.7. adminUrl This attribute contains the URL of the Administration Server. OID 2.16.840.1.113730.3.1.75 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.8. aliasedObjectName The aliasedObjectName attribute is used by Directory Server to identify alias entries. This attribute contains the DN (distinguished name) for the entry for which this entry is the alias. For example: aliasedObjectName: uid=jdoe,ou=people,dc=example,dc=com OID 2.5.4.1 Syntax DN Multi- or Single-Valued Single-valued Defined in RFC 2256 8.9. associatedDomain The associatedDomain attribute contains the DNS domain associated with the entry in the directory tree. For example, the entry with the distinguished name c=US,o=Example Corporation has the associated domain of EC.US . These domains should be represented in RFC 822 order. associatedDomain:US OID 0.9.2342.19200300.100.1.37 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.10. associatedName The associatedName identifies an organizational directory tree entry associated with a DNS domain. For example: associatedName: c=us OID 0.9.2342.19200300.100.1.38 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.11. attributeTypes This attribute is used in a schema file to identify an attribute defined within the subschema. OID 2.5.21.5 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 8.12. audio The audio attribute contains a sound file using a binary format. This attribute uses a u-law encoded sound data. 
For example: audio:: AAAAAA== OID 0.9.2342.19200300.100.1.55 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.13. authorCn The authorCn attribute contains the common name of the document's author. For example: authorCn: John Smith OID 0.9.2342.19200300.102.1.11 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.14. authorityRevocationList The authorityRevocationList attribute contains a list of revoked CA certificates. This attribute should be requested and stored in a binary format, like authorityRevocationList;binary . For example: authorityrevocationlist;binary:: AAAAAA== OID 2.5.4.38 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.15. authorSn The authorSn attribute contains the last name or family name of the author of a document entry. For example: authorSn: Smith OID 0.9.2342.19200300.102.1.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.16. automountInformation This attribute contains information used by the autofs automounter. Note The automountInformation attribute is defined in 60autofs.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 60autofs.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.33 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2307 8.17. bootFile This attribute contains the boot image file name. Note The bootFile attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.24 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 8.18. bootParameter This attribute contains the value for rpc.bootparamd . Note The bootParameter attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.23 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 8.19. buildingName The buildingName attribute contains the building name associated with the entry. For example: buildingName: 14 OID 0.9.2342.19200300.100.1.48 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.20. businessCategory The businessCategory attribute identifies the type of business in which the entry is engaged. The attribute value should be a broad generalization, such as a corporate division level. For example: businessCategory: Engineering OID 2.5.4.15 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.21. cACertificate The cACertificate attribute contains a CA certificate. The attribute should be requested and stored binary format, such as cACertificate;binary . For example: cACertificate;binary:: AAAAAA== OID 2.5.4.37 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.22. c The countryName , or c , attribute contains the two-character country code to represent the country names. The country codes are defined by the ISO. 
For example: countryName: GB c: US OID 2.5.4.6 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2256 8.23. carLicense The carLicense attribute contains an entry's automobile license plate number. For example: carLicense: 6ABC246 OID 2.16.840.1.113730.3.1.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.24. certificateRevocationList The certificateRevocationList attribute contains a list of revoked user certificates. The attribute value is to be requested and stored in binary form, as certificateRevocationList;binary . For example: certificateRevocationList;binary:: AAAAAA== OID 2.5.4.39 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.25. cn The commonName attribute contains the name of an entry. For user entries, the cn attribute is typically the person's full name. For example: commonName: John Smith cn: Bill Anderson With the LDAPReplica or LDAPServer object classes, the cn attribute value has the following format: cn: replicater.example.com:17430/dc%3Dexample%2Cdc%3Dcom OID 2.5.4.3 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.26. co The friendlyCountryName attribute contains a country name; this can be any string. Often, the c attribute is used for the ISO-designated two-letter country code, while the co attribute contains a more readable country name. For example: friendlyCountryName: Ireland co: Ireland OID 0.9.2342.19200300.100.1.43 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.27. cosAttribute The cosAttribute contains the name of the attribute for which to generate a value for the CoS. There can be more than one cosAttribute value specified. This attribute is used by all types of CoS definition entries. OID 2.16.840.1.113730.3.1.550 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.28. cosIndirectSpecifier The cosIndirectSpecifier specifies the attribute values used by an indirect CoS to identify the template entry. OID 2.16.840.1.113730.3.1.577 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 8.29. cosPriority The cosPriority attribute specifies which template provides the attribute value when CoS templates compete to provide an attribute value. This attribute represents the global priority of a template. A priority of zero is the highest priority. OID 2.16.840.1.113730.3.1.569 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 8.30. cosSpecifier The cosSpecifier attribute contains the attribute value used by a classic CoS, which, along with the template entry's DN, identifies the template entry. OID 2.16.840.1.113730.3.1.551 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 8.31. cosTargetTree The cosTargetTree attribute defines the subtrees to which the CoS schema applies. The values for this attribute for the schema and for multiple CoS schema may overlap their target trees arbitrarily. OID 2.16.840.1.113730.3.1.552 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 8.32. cosTemplateDn The cosTemplateDn attribute contains the DN of the template entry which contains a list of the shared attribute values. Changes to the template entry attribute values are automatically applied to all the entries within the scope of the CoS. A single CoS might have more than one template entry associated with it.
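To illustrate how the CoS attributes described above fit together, the following is a minimal sketch of a classic CoS definition entry; the entry names, the departmentNumber specifier, and the shared postalCode attribute are invented for this illustration:
dn: cn=postalCodeCoS,ou=People,dc=example,dc=com
objectclass: top
objectclass: cosSuperDefinition
objectclass: cosClassicDefinition
cosTemplateDn: cn=postalCodeTemplates,ou=People,dc=example,dc=com
cosSpecifier: departmentNumber
cosAttribute: postalCode
In this sketch, the value of each entry's departmentNumber attribute selects a template entry under cn=postalCodeTemplates, and that template supplies the shared postalCode value.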
OID 2.16.840.1.113730.3.1.553 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 8.33. crossCertificatePair The value for the crossCertificatePair attribute must be requested and stored in binary format, such as certificateCertificateRepair;binary . For example: crossCertificatePair;binary:: AAAAAA== OID 2.5.4.40 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.34. dc The dc attribute contains one component of a domain name. For example: dc: example domainComponent: example OID 0.9.2342.19200300.100.1.25 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2247 8.35. deltaRevocationList The deltaRevocationList attribute contains a certificate revocation list (CRL). The attribute value is requested and stored in binary format, such as deltaRevocationList;binary . OID 2.5.4.53 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.36. departmentNumber The departmentNumber attribute contains an entry's department number. For example: departmentNumber: 2604 OID 2.16.840.1.113730.3.1.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.37. description The description attribute provides a human-readable description for an entry. For person or organization object classes, this can be used for the entry's role or work assignment. For example: description: Quality control inspector for the ME2873 product line. OID 2.5.4.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.38. destinationIndicator The destinationIndicator attribute contains the city and country associated with the entry. This attribute was once required to provide public telegram service and is generally used in conjunction with the registeredAddress attribute. For example: destinationIndicator: Stow, Ohio, USA OID 2.5.4.27 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.39. displayName The displayName attributes contains the preferred name of a person to use when displaying that person's entry. This is especially useful for showing the preferred name for an entry in a one-line summary list. Since other attribute types, such as cn , are multi-valued, they cannot be used to display a preferred name. For example: displayName: John Smith OID 2.16.840.1.113730.3.1.241 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2798 8.40. dITRedirect The dITRedirect attribute indicates that the object described by one entry now has a newer entry in the directory tree. This attribute may be used when an individual's place of work changes, and the individual acquires a new organizational DN. dITRedirect: cn=jsmith,dc=example,dc=com OID 0.9.2342.19200300.100.1.54 Syntax DN Defined in RFC 1274 8.41. dmdName The dmdName attribute value specifies a directory management domain (DMD), the administrative authority that operates Directory Server. OID 2.5.4.54 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2256 8.42. dn The dn attribute contains an entry's distinguished name. For example: dn: uid=Barbara Jensen,ou=Quality Control,dc=example,dc=com OID 2.5.4.49 Syntax DN Defined in RFC 2256 8.43. dNSRecord The dNSRecord attribute contains DNS resource records, including type A (Address), type MX (Mail Exchange), type NS (Name Server), and type SOA (Start of Authority) resource records. 
For example: dNSRecord: IN NS ns.uu.net OID 0.9.2342.19200300.100.1.26 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Internet Directory Pilot 8.44. documentAuthor The documentAuthor attribute contains the DN of the author of a document entry. For example: documentAuthor: uid=Barbara Jensen,ou=People,dc=example,dc=com OID 0.9.2342.19200300.100.1.14 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.45. documentIdentifier The documentIdentifier attribute contains a unique identifier for a document. For example: documentIdentifier: L3204REV1 OID 0.9.2342.19200300.100.1.11 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.46. documentLocation The documentLocation attribute contains the location of the original version of a document. For example: documentLocation: Department Library OID 0.9.2342.19200300.100.1.15 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.47. documentPublisher The documentPublisher attribute contains the person or organization who published a document. For example: documentPublisher: Southeastern Publishing OID 0.9.2342.19200300.100.1.56 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 8.48. documentStore The documentStore attribute contains information on where the document is stored. OID 0.9.2342.19200300.102.1.10 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.49. documentTitle The documentTitle attribute contains a document's title. For example: documentTitle: Installing Red Hat Directory Server OID 0.9.2342.19200300.100.1.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.50. documentVersion The documentVersion attribute contains the current version number for the document. For example: documentVersion: 1.1 OID 0.9.2342.19200300.100.1.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.51. drink The favouriteDrink attribute contains a person's favorite beverage. This can be shortened to drink . For example: favouriteDrink: iced tea drink: cranberry juice OID 0.9.2342.19200300.100.1.5 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.52. dSAQuality The dSAQuality attribute contains the rating of the directory system agents' (DSA) quality. This attribute allows a DSA manager to indicate the expected level of availability of the DSA. For example: dSAQuality: high OID 0.9.2342.19200300.100.1.49 Syntax Directory-String Multi- or Single-Valued Single-valued Defined in RFC 1274 8.53. employeeNumber The employeeNumber attribute contains the employee number for the person. For example: employeeNumber: 3441 OID 2.16.840.1.113730.3.1.3 Syntax Directory-String Multi- or Single-Valued Single-valued Defined in RFC 2798 8.54. employeeType The employeeType attribute contains the employment type for the person. For example: employeeType: Full time OID 2.16.840.1.113730.3.1.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.55. enhancedSearchGuide The enhancedSearchGuide attribute contains information used by an X.500 client to construct search filters. For example: enhancedSearchGuide: (uid=bjensen) OID 2.5.4.47 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.56. fax The facsimileTelephoneNumber attribute contains the entry's facsimile number; this attribute can be abbreviated as fax . 
For example: facsimileTelephoneNumber: +1 415 555 1212 fax: +1 415 555 1212 OID 2.5.4.23 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.57. gecos The gecos attribute is used to determine the GECOS field for the user. This is comparable to the cn attribute, although using a gecos attribute allows additional information to be embedded in the GECOS field aside from the common name. This field is also useful if the common name stored in the directory is not the user's full name. gecos: John Smith Note The gecos attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.2 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2307 8.58. generationQualifier The generationQualifier attribute contains the generation qualifier for a person's name, which is usually appended as a suffix to the name. For example: generationQualifier:III OID 2.5.4.44 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.59. gidNumber The gidNumber attribute contains a unique numeric identifier for a group entry or to identify the group for a user entry. This is analogous to the group number in Unix. gidNumber: 100 Note The gidNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.1 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.60. givenName The givenName attribute contains an entry's given name, which is usually the first name. For example: givenName: Rachel OID 2.5.4.42 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.61. homeDirectory The homeDirectory attribute contains the path to the user's home directory. homeDirectory: /home/jsmith Note The homeDirectory attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.3 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 8.62. homePhone The homePhone attribute contains the entry's residential phone number. For example: homePhone: 415-555-1234 Note Although RFC 1274 defines both homeTelephoneNumber and homePhone as names for the residential phone number attribute, Directory Server only implements the homePhone name. OID 0.9.2342.19200300.100.1.20 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.63. homePostalAddress The homePostalAddress attribute contains an entry's home mailing address. Since this attribute generally spans multiple lines, each line break has to be represented by a dollar sign ( $ ). To represent an actual dollar sign ( $ ) or backslash ( \ ) in the attribute value, use the escaped hex values \24 and \5c , respectively. For example: homePostalAddress: 1234 Ridgeway Drive$Santa Clara, CA$99555 To represent the following string: The dollar ($) value can be found in the c:\cost file. The entry value is: The dollar (\24) value can be found$in the c:\5ccost file.
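As a further illustration of these rules, both of the following values are invented for this example; the first stores a three-line address with a dollar sign marking each line break, and the second contains a literal dollar sign written with its escaped hex value:
homePostalAddress: 45 River Road$Apt 9$Springfield, MA 01103
homePostalAddress: Budget Office (\24 accounts)$12 Main Street$Anytown, CA 95054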
OID 0.9.2342.19200300.100.1.39 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.64. host The host contains the host name of a computer. For example: host: labcontroller01 OID 0.9.2342.19200300.100.1.9 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.65. houseIdentifier The houseIdentifier contains an identifier for a specific building at a location. For example: houseIdentifier: B105 OID 2.5.4.51 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.66. inetDomainBaseDN This attribute identifies the base DN of user subtree for a DNS domain. OID 2.16.840.1.113730.3.1.690 Syntax DN Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 8.67. inetDomainStatus This attribute shows the current status of the domain. A domain has a status of active , inactive , or deleted . OID 2.16.840.1.113730.3.1.691 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 8.68. inetSubscriberAccountId This attribute contains the a unique attribute used to link the user entry for the subscriber to a billing system. OID 2.16.840.1.113730.3.1.694 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Subscriber interoperability 8.69. inetSubscriberChallenge The inetSubscriberChallenge attribute contains some kind of question or prompt, the challenge phrase, which is used to confirm the identity of the user in the subscriberIdentity attribute. This attribute is used in conjunction with the inetSubscriberResponse attribute, which contains the response to the challenge. OID 2.16.840.1.113730.3.1.695 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 8.70. inetSubscriberResponse The inetSubscriberResponse attribute contains the answer to the challenge question in the inetSubscriberChallenge attribute to verify the user in the subscriberIdentity attribute. OID 2.16.840.1.113730.3.1.696 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Subscriber interoperability 8.71. inetUserHttpURL This attribute contains the web addresses associated with the user. OID 2.16.840.1.113730.3.1.693 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Subscriber interoperability 8.72. inetUserStatus This attribute shows the current status of the user (subscriber). A user has a status of active , inactive , or deleted . OID 2.16.840.1.113730.3.1.692 Syntax DirectoryString Multi- or Single-Valued Single-Valued Defined in Subscriber interoperability 8.73. info The info attribute contains any general information about an object. Avoid using this attribute for specific information and rely instead on specific, possibly custom, attribute types. For example: info: not valid OID 0.9.2342.19200300.100.1.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.74. initials The initials contains a person's initials; this does not contain the entry's surname. For example: initials: BAJ Directory Server and Active Directory handle the initials attribute differently. The Directory Server allows a practically unlimited number of characters, while Active Directory has a restriction of six characters. If an entry is synced with a Windows peer and the value of the initials attribute is longer than six characters, then the value is automatically truncated to six characters when it is synchronized. 
There is no information written to the error log to indicate that synchronization changed the attribute value, either. OID 2.5.4.43 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.75. installationTimeStamp This contains the time that the server instance was installed. OID 2.16.840.1.113730.3.1.73 Syntax DirectoryString Multi- or Single-Valued Multi-Valued Defined in Netscape Administration Services 8.76. internationalISDNNumber The internationalISDNNumber attribute contains the ISDN number of a document entry. This attribute uses the internationally recognized format for ISDN addresses given in CCITT Rec. E. 164. OID 2.5.4.25 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.77. ipHostNumber This contains the IP address for a server. Note The ipHostNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.19 Syntax DirectoryString Multi- or Single-Valued Multi-Valued Defined in RFC 2307 8.78. ipNetmaskNumber This contains the IP netmask for the server. Note The ipNetmaskNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.21 Syntax DirectoryString Multi- or Single-Valued Multi-Valued Defined in RFC 2307 8.79. ipNetworkNumber This identifies the IP network. Note The ipNetworkNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.20 Syntax DirectoryString Multi- or Single-Valued Single-Valued Defined in RFC 2307 8.80. ipProtocolNumber This attribute identifies the IP protocol version number. Note The ipProtocolNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.17 Syntax Integer Multi- or Single-Valued Single-Valued Defined in RFC 2307 8.81. ipServicePort This attribute gives the port used by the IP service. Note The ipServicePort attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.15 Syntax Integer Multi- or Single-Valued Single-Valued Defined in RFC 2307 8.82. ipServiceProtocol This identifies the protocol used by the IP service. Note The ipServiceProtocol attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.16 Syntax DirectoryString Multi- or Single-Valued Multi-Valued Defined in RFC 2307 8.83.
janetMailbox The janetMailbox contains a JANET email address, usually for users located in the United Kingdom who do not use an RFC 822 email address. Entries with this attribute must also contain the rfc822Mailbox attribute. OID 0.9.2342.19200300.100.1.46 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.84. jpegPhoto The jpegPhoto attribute contains a JPEG photo, a binary value. For example: jpegPhoto:: AAAAAA== OID 0.9.2342.19200300.100.1.60 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.85. keyWords The keyWords attribute contains keywords associated with the entry. For example: keyWords: directory LDAP X.500 OID 0.9.2342.19200300.102.1.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.86. knowledgeInformation This attribute is no longer used. OID 2.5.4.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.87. labeledURI The labeledURI contains a Uniform Resource Identifier (URI) which is related, in some way, to the entry. Values placed in the attribute should consist of a URI (currently only URLs are supported), optionally followed by one or more space characters and a label. labeledURI: http://home.example.com labeledURI: http://home.example.com Example website OID 1.3.6.1.4.1.250.1.57 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2079 8.88. l The localityName , or l , attribute contains the county, city, or other geographical designation associated with the entry. For example: localityName: Santa Clara l: Santa Clara OID 2.5.4.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.89. loginShell The loginShell attribute contains the path to a script that is launched automatically when a user logs into the domain. loginShell: c:\scripts\jsmith.bat Note The loginShell attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.4 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 8.90. macAddress This attribute gives the MAC address for a server or piece of equipment. Note The macAddress attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.22 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307 8.91. mailAccessDomain This attribute lists the domain which a user can use to access the messaging server. OID 2.16.840.1.113730.3.1.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.92. mail The mail attribute contains a user's primary email address. This attribute value is retrieved and displayed by whitepage applications. For example: mail: jsmith@example.com OID 0.9.2342.19200300.100.1.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 8.93. mailAlternateAddress The mailAlternateAddress attribute contains additional email addresses for a user. This attribute does not reflect the default or primary email address; that email address is set by the mail attribute.
For example: mailAlternateAddress: jsmith@example.com mailAlternateAddress: john.smith@example.com OID 2.16.840.1.113730.3.1.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.94. mailAutoReplyMode This attribute sets whether automatic replies are enabled for the messaging server. OID 2.16.840.1.113730.3.1.14 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.95. mailAutoReplyText This attribute stores the text to be used in an auto-reply email. OID 2.16.840.1.113730.3.1.15 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.96. mailDeliveryOption This attribute defines the mail delivery mechanism to use for the mail user. OID 2.16.840.1.113730.3.1.16 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.97. mailEnhancedUniqueMember This attribute contains the DN of a unique member of a mail group. OID 2.16.840.1.113730.3.1.31 Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.98. mailForwardingAddress This attribute contains an email address to which to forward a user's email. OID 2.16.840.1.113730.3.1.17 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.99. mailHost The mailHost attribute contains the host name of a mail server. For example: mailHost: mail.example.com OID 2.16.840.1.113730.3.1.18 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.100. mailMessageStore This identifies the location of a user's email box. OID 2.16.840.1.113730.3.1.19 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.101. mailPreferenceOption The mailPreferenceOption defines whether a user should be included on a mailing list, both electronic and physical. There are three options. 0 Does not appear in mailing lists. 1 Add to any mailing lists. 2 Added only to mailing lists which the provider views as relevant to the user's interests. If the attribute is absent, then the default is to assume that the user is not included on any mailing list. This attribute should be interpreted by anyone using the directory to derive mailing lists, and its value should be respected. For example: mailPreferenceOption: 0 OID 0.9.2342.19200300.100.1.47 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 1274 8.102. mailProgramDeliveryInfo This attribute contains any commands to use for programmed mail delivery. OID 2.16.840.1.113730.3.1.20 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.103. mailQuota This attribute sets the amount of disk space allowed for a user's mail box. OID 2.16.840.1.113730.3.1.21 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.104. mailRoutingAddress This attribute contains the routing address to use when forwarding the emails received by the user to another messaging server. OID 2.16.840.1.113730.3.1.24 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.105. manager The manager contains the distinguished name (DN) of the manager for the person. For example: manager: cn=Bill Andersen,ou=Quality Control,dc=example,dc=com OID 0.9.2342.19200300.100.1.10 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.106. member The member attribute contains the distinguished names (DNs) of each member of a group.
For example: member: cn=John Smith,dc=example,dc=com OID 2.5.4.31 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.107. memberCertificateDescription This attribute is a multi-valued attribute where each value is a description, a pattern, or a filter matching the subject DN of a certificate, usually a certificate used for TLS client authentication. memberCertificateDescription matches any certificate that contains a subject DN with the same attribute-value assertions (AVAs) as the description. The description may contain multiple ou AVAs. A matching DN must contain those same ou AVAs, in the same order, although it may be interspersed with other AVAs, including other ou AVAs. For any other attribute type (not ou ), there should be at most one AVA of that type in the description. If there are several, all but the last are ignored. A matching DN must contain that same AVA but no other AVA of the same type nearer the root (later, syntactically). AVAs are considered the same if they contain the same attribute description (case-insensitive comparison) and the same attribute value (case-insensitive comparison, leading and trailing whitespace ignored, and consecutive whitespace characters treated as a single space). To be considered a member of a group with the following memberCertificateDescription value, a certificate needs to include ou=x , ou=A , and dc=example , but not dc=company . memberCertificateDescription: {ou=x,ou=A,dc=company,dc=example} To match the group's requirements, a certificate's subject DNs must contain the same ou attribute types in the same order as defined in the memberCertificateDescription attribute. OID 2.16.840.1.113730.3.1.199 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 8.108. memberNisNetgroup This attribute merges the attribute values of another netgroup into the current one by listing the name of the merging netgroup. Note The memberNisNetgroup attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.13 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 8.109. memberOf This attribute contains the name of a group to which the user is a member. memberOf is the default attribute generated by the MemberOf Plug-in on the user entry of a group member. This attribute is automatically synchronized to the listed member attributes in a group entry, so that displaying group membership for entries is managed by Directory Server. Note This attribute is only synchronized between group entries and the corresponding members' user entries if the MemberOf Plug-in is enabled and is configured to use this attribute. OID 1.2.840.113556.1.2.102 Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape Delegated Administrator 8.110. memberUid The memberUid attribute contains the login name of the member of a group; this can be different than the DN identified in the member attribute. memberUID: jsmith Note The memberUID attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.12 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 8.111. 
memberURL This attribute identifies a URL associated with each member of a group. Any type of labeled URL can be used. memberURL: ldap://cn=jsmith,ou=people,dc=example,dc=com OID 2.16.840.1.113730.3.1.198 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 8.112. mepManagedBy This attribute contains a pointer in an automatically-generated entry that points back to the DN of the originating entry. This attribute is set by the Managed Entries Plug-in and cannot be modified manually. OID 2.16.840.1.113730.3.1.2086 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 8.113. mepManagedEntry This attribute contains a pointer to an automatically-generated entry which corresponds to the current entry. This attribute is set by the Managed Entries Plug-in and cannot be modified manually. OID 2.16.840.1.113730.3.1.2087 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 8.114. mepMappedAttr This attribute sets an attribute in the Managed Entries template entry which must exist in the generated entry. The mapping means that some value of the originating entry is used to supply the given attribute. The values of these attributes will be tokens in the form attribute: $attr . For example: mepMappedAttr: gidNumber: $gidNumber As long as the syntax of the expanded token of the attribute does not violate the required attribute syntax, then other terms and strings can be used in the attribute. For example: mepMappedAttr: cn: Managed Group for $cn OID 2.16.840.1.113730.3.1.2089 Syntax OctetString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.115. mepRDNAttr This attribute sets which attribute to use as the naming attribute in the automatically-generated entry created by the Managed Entries Plug-in. Whatever attribute type is given in the naming attribute should be present in the managed entries template entry as a mepMappedAttr . OID 2.16.840.1.113730.3.1.2090 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 8.116. mepStaticAttr This attribute sets an attribute with a defined value that must be added to the automatically-generated entry managed by the Managed Entries Plug-in. This value will be used for every entry generated by that instance of the Managed Entries Plug-in. mepStaticAttr: posixGroup OID 2.16.840.1.113730.3.1.2088 Syntax OctetString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.117. mgrpAddHeader This attribute contains information about the header in the messages. OID 2.16.840.1.113730.3.1.781 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.118. mgrpAllowedBroadcaster This attribute sets whether to allow the user to send broadcast messages. OID 2.16.840.1.113730.3.1.22 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.119. mgrpAllowedDomain This attribute sets the domains for the mail group. OID 2.16.840.1.113730.3.1.23 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.120. mgrpApprovePassword This attribute sets whether a user must approve a password used to access their email. OID mgrpApprovePassword-oid Syntax IA5String Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 8.121. mgrpBroadcasterPolicy This attribute defines the policy for broadcasting emails.
OID 2.16.840.1.113730.3.1.788 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.122. mgrpDeliverTo This attribute contains information about the delivery destination for email. OID 2.16.840.1.113730.3.1.25 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.123. mgrpErrorsTo This attribute contains information about where to deliver error messages for the messaging server. OID 2.16.840.1.113730.3.1.26 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 8.124. mgrpModerator This attribute contains the contact name for the mailing list moderator. OID 2.16.840.1.113730.3.1.33 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.125. mgrpMsgMaxSize This attribute sets the maximum size allowed for email messages. OID 2.16.840.1.113730.3.1.32 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 8.126. mgrpMsgRejectAction This attribute defines what actions the messaging server should take for rejected messages. OID 2.16.840.1.113730.3.1.28 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.127. mgrpMsgRejectText This attribute sets the text to use for rejection notifications. OID 2.16.840.1.113730.3.1.29 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.128. mgrpNoDuplicateChecks This attribute defines whether the messaging server checks for duplicate emails. OID 2.16.840.1.113730.3.1.789 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 8.129. mgrpRemoveHeader This attribute sets whether the header is removed in reply messages. OID 2.16.840.1.113730.3.1.801 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.130. mgrpRFC822MailMember This attribute identifies the member of a mail group. OID 2.16.840.1.113730.3.1.30 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.131. mobile The mobile , or mobileTelephoneNumber , contains the entry's mobile or cellular phone number. For example: mobileTelephoneNumber: 415-555-4321 OID 0.9.2342.19200300.100.1.41 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.132. mozillaCustom1 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.1 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.133. mozillaCustom2 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.2 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.134. mozillaCustom3 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.135. mozillaCustom4 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.4 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.136. mozillaHomeCountryName This attribute sets the country used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.6 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.137. 
mozillaHomeLocalityName This attribute sets the city used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.138. mozillaHomePostalCode This attribute sets the postal code used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.5 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.139. mozillaHomeState This attribute sets the state or province used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.4 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.140. mozillaHomeStreet2 This attribute contains the second line of a street address used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.2 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.141. mozillaHomeStreet This attribute sets the street address used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.1 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.142. mozillaHomeUrl This attribute contains a URL used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.7 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.143. mozillaNickname This attribute contains a nickname used by Mozilla Thunderbird for a shared address book. OID 1.3.6.1.4.1.13769.2.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Mozilla Address Book 8.144. mozillaSecondEmail This attribute contains an alternate or secondary email address for an entry in a shared address book for Mozilla Thunderbird. OID 1.3.6.1.4.1.13769.2.2 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.145. mozillaUseHtmlMail This attribute sets an email type preference for an entry in a shared address book in Mozilla Thunderbird. OID 1.3.6.1.4.1.13769.2.3 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.146. mozillaWorkStreet2 This attribute contains a street address for a workplace or office for an entry in Mozilla Thunderbird's shared address book. OID 1.3.6.1.4.1.13769.3.8 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.147. mozillaWorkUrl This attribute contains a URL for a work site in an entry in a shared address book in Mozilla Thunderbird. OID 1.3.6.1.4.1.13769.3.9 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 8.148. multiLineDescription This attribute contains a description of an entry which spans multiple lines in the LDIF file. OID 1.3.6.1.4.1.250.1.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.149. name The name attribute identifies the attribute supertype which can be used to form string attribute types for naming. It is unlikely that values of this type will occur in an entry. LDAP server implementations that do not support attribute subtyping do not need to recognize this attribute in requests. Client implementations should not assume that LDAP servers are capable of performing attribute subtyping. OID 2.5.4.41 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.150. netscapeReversiblePassword This attribute contains the password for HTTP Digest/MD5 authentication. 
OID 2.16.840.1.113730.3.1.812 Syntax OctetString Multi- or Single-Valued Multi-valued Defined in Netscape Web Server 8.151. NisMapEntry This attribute contains the information for a NIS map to be used by Network Information Services. Note This attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.27 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 8.152. nisMapName This attribute contains the name of a mapping used by a NIS server. OID 1.3.6.1.1.1.1.26 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307 8.153. nisNetgroupTriple This attribute contains information on a netgroup used by a NIS server. Note This attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.14 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 8.154. nsAccessLog This attribute identifies the access log used by a server. OID nsAccessLog-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.155. nsAdminAccessAddresses This attribute contains the IP address of the Administration Server used by the instance. OID nsAdminAccessAddresses-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.156. nsAdminAccessHosts This attribute contains the host name of the Administration Server. OID nsAdminAccessHosts-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.157. nsAdminAccountInfo This attribute contains other information about the Administration Server account. OID nsAdminAccountInfo-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.158. nsAdminCacheLifetime This sets the length of time to store the cache used by Directory Server. OID nsAdminCacheLifetime-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.159. nsAdminCgiWaitPid This attribute defines the wait time for Administration Server CGI process IDs. OID nsAdminCgiWaitPid-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.160. nsAdminDomainName This attribute contains the name of the administration domain containing the Directory Server instance. OID nsAdminDomainName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.161. nsAdminEnableEnduser This attribute sets whether to allow end user access to admin services. OID nsAdminEnableEnduser-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.162. nsAdminEndUserHTMLIndex This attribute sets whether to allow end users to access the HTML index of admin services. OID nsAdminEndUserHTMLIndex-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.163. nsAdminGroupName This attribute gives the name of the admin group.
OID nsAdminGroupName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.164. nsAdminOneACLDir This attribute gives the directory path to the directory containing access control lists for the Administration Server. OID nsAdminOneACLDir-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.165. nsAdminSIEDN This attribute contains the DN of the serer instance entry (SIE) for the Administration Server. OID nsAdminSIEDN-oid Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.166. nsAdminUsers This attribute gives the path and name of the file which contains the information for the Administration Server admin user. OID nsAdminUsers-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.167. nsAIMid This attribute contains the AOL Instant Messaging user ID for the user. OID 2.16.840.1.113730.3.2.300 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.168. nsBaseDN This contains the base DN used in Directory Server's server instance definition entry. OID nsBaseDN-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.169. nsBindDN This attribute contains the bind DN defined in Directory Server SIE. OID nsBindDN-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.170. nsBindPassword This attribute contains the password used by the bind DN defined in nsBindDN . OID nsBindPassword-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.171. nsBuildNumber This defines, in Directory Server SIE, the build number of the server instance. OID nsBuildNumber-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.172. nsBuildSecurity This defines, in Directory Server SIE, the build security level. OID nsBuildSecurity-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.173. nsCertConfig This attribute defines the configuration for the Red Hat Certificate System. OID nsCertConfig-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Certificate System 8.174. nsClassname OID nsClassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.175. nsConfigRoot This attribute contains the root DN of the configuration directory. OID nsConfigRoot-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.176. nscpAIMScreenname This attribute gives the AIM screen name of a user. OID 1.3.6.1.4.1.13769.2.4 Syntax TelephoneString Multi- or Single-Valued Multi-valued Defined in Mozilla Address Book 8.177. nsDefaultAcceptLanguage This attribute contains the language codes which are accepted for HTML clients. OID nsDefaultAcceptLanguage-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.178. nsDefaultObjectClass This attribute stores object class information in a container entry. OID nsDefaultObjectClass-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.179. nsDeleteclassname OID nsDeleteclassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.180. nsDirectoryFailoverList This attribute contains a list of Directory Servers to use for failover. 
OID nsDirectoryFailoverList-oid Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.181. nsDirectoryInfoRef This attribute refers to a DN of an entry with information about the server. OID nsDirectoryInfoRef-oid Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.182. nsDirectoryURL This attribute contains Directory Server URL. OID nsDirectoryURL-oid Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.183. nsDisplayName This attribute contains a display name. OID nsDisplayName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.184. nsErrorLog This attribute identifies the error log used by the server. OID nsErrorLog-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.185. nsExecRef This attribute contains the path or location of an executable which can be used to perform server tasks. OID nsExecRef-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.186. nsExpirationDate This attribute contains the expiration date of an application. OID nsExpirationDate-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.187. nsGroupRDNComponent This attribute defines the attribute to use for the RDN of a group entry. OID nsGroupRDNComponent-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.188. nsHardwarePlatform This attribute indicates the hardware on which the server is running. The value of this attribute is the same as the output from uname -m . For example: nsHardwarePlatform:i686 OID nsHardwarePlatform-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.189. nsHelpRef This attribute contains a reference to an online help file. OID nsHelpRef-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.190. nsHostLocation This attribute contains information about the server host. OID nsHostLocation-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.191. nsICQid This attribute contains an ICQ ID for the user. OID 2.16.840.1.113730.3.1.2014 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.192. nsInstalledLocation This attribute contains the installation directory for Directory Servers which are version 7.1 or older. OID nsInstalledLocation-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.193. nsJarfilename This attribute gives the jar file name used by the Console. OID nsJarfilename-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.194. nsLdapSchemaVersion This gives the version number of the LDAP directory schema. OID nsLdapSchemaVersion-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.195. nsLicensedFor The nsLicensedFor attribute identifies the server the user is licensed to use. Administration Server expects each nsLicenseUser entry to contain zero or more instances of this attribute. Valid keywords for this attribute include the following: slapd for a licensed Directory Server client. mail for a licensed mail server client. news for a licensed news server client. cal for a licensed calender server client. For example: nsLicensedFor: slapd OID 2.16.840.1.113730.3.1.36 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Server 8.196. 
nsLicenseEndTime Reserved for future use. OID 2.16.840.1.113730.3.1.38 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Server 8.197. nsLicenseStartTime Reserved for future use. OID 2.16.840.1.113730.3.1.37 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Server 8.198. nsLogSuppress This attribute sets whether to suppress server logging. OID nsLogSuppress-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.199. nsmsgDisallowAccess This attribute defines access to a messaging server. OID nsmsgDisallowAccess-oid Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.200. nsmsgNumMsgQuota This attribute sets a quota for the number of messages which will be kept by the messaging server. OID nsmsgNumMsgQuota-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.201. nsMSNid This attribute contains the MSN instant messaging ID for the user. OID 2.16.840.1.113730.3.1.2016 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.202. nsNickName This attribute gives a nickname for an application. OID nsNickName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.203. nsNYR OID nsNYR-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Services 8.204. nsOsVersion This attribute contains the version number of the operating system for the host on which the server is running. OID nsOsVersion-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.205. nsPidLog OID nsPidLog-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.206. nsPreference This attribute stores the Console preference settings. OID nsPreference-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.207. nsProductName This contains the name of the product, such as Red Hat Directory Server or Administration Server. OID nsProductName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.208. nsProductVersion This contains the version number of Directory Server. OID nsProductVersion-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.209. nsRevisionNumber This attribute contains the revision number of Directory Server or Administration Server. OID nsRevisionNumber-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.210. nsSecureServerPort This attribute contains the TLS port for Directory Server. Note This attribute does not configure the TLS port for Directory Server. This is configured in nsslapd-secureport configuration attribute in Directory Server's dse.ldif file. Configuration attributes are described in the Configuration, Command, and File Reference . OID nsSecureServerPort-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.211. nsSerialNumber This attribute contains a serial number or tracking number assigned to a specific server application, such as Red Hat Directory Server or Administration Server. OID nsSerialNumber-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.212. nsServerAddress This attribute contains the IP address of the server host on which Directory Server is running. 
OID nsServerAddress-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.213. nsServerCreationClassname This attribute gives the class name to use when creating a server. OID nsServerCreationClassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.214. nsServerID This contains the server's instance name. For example: nsServerID: slapd-example OID nsServerID-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.215. nsServerMigrationClassname This attribute contains the name of the class to use when migrating a server. OID nsServerMigrationClassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.216. nsServerPort This attribute contains the standard LDAP port for Directory Server. Note This attribute does not configure the standard port for Directory Server. This is configured in nsslapd-port configuration attribute in Directory Server's dse.ldif file. Configuration attributes are described in the Configuration, Command, and File Reference . OID nsServerPort-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.217. nsServerSecurity This shows whether Directory Server requires a secure TLS or SSL connection. OID nsServerSecurity-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.218. nsSNMPContact This attribute contains the contact information provided by the SNMP. OID 2.16.840.1.113730.3.1.235 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.219. nsSNMPDescription This contains a description of the SNMP service. OID 2.16.840.1.113730.3.1.236 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.220. nsSNMPEnabled This attribute shows whether SNMP is enabled for the server. OID 2.16.840.1.113730.3.1.232 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.221. nsSNMPLocation This attribute shows the location provided by the SNMP service. OID 2.16.840.1.113730.3.1.234 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.222. nsSNMPMasterHost This attribute shows the host name for the SNMP master agent. OID 2.16.840.1.113730.3.1.237 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.223. nsSNMPMasterPort This attribute shows the port number for the SNMP subagent. OID 2.16.840.1.113730.3.1.238 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.224. nsSNMPOrganization This attribute contains the organization information provided by SNMP. OID 2.16.840.1.113730.3.1.233 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.225. nsSuiteSpotUser This attribute has been obsoleted. This attribute identifies the Unix user who installed the server. OID nsSuiteSpotUser-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.226. nsTaskLabel OID nsTaskLabel-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.227. nsUniqueAttribute This sets a unique attribute for the server preferences. OID nsUniqueAttribute-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.228. nsUserIDFormat This attribute sets the format to use to generate the uid attribute from the givenname and sn attributes. 
OID nsUserIDFormat-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.229. nsUserRDNComponent This attribute sets the attribute type to set the RDN for user entries. OID nsUserRDNComponent-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.230. nsValueBin OID 2.16.840.1.113730.3.1.247 Syntax Binary Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.231. nsValueCES OID 2.16.840.1.113730.3.1.244 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.232. nsValueCIS OID 2.16.840.1.113730.3.1.243 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.233. nsValueDefault OID 2.16.840.1.113730.3.1.250 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.234. nsValueDescription OID 2.16.840.1.113730.3.1.252 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.235. nsValueDN OID 2.16.840.1.113730.3.1.248 Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.236. nsValueFlags OID 2.16.840.1.113730.3.1.251 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.237. nsValueHelpURL OID 2.16.840.1.113730.3.1.254 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.238. nsValueInt OID 2.16.840.1.113730.3.1.246 Syntax Integer Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.239. nsValueSyntax OID 2.16.840.1.113730.3.1.253 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.240. nsValueTel OID 2.16.840.1.113730.3.1.245 Syntax TelephoneString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.241. nsValueType OID 2.16.840.1.113730.3.1.249 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 8.242. nsVendor This contains the name of the server vendor. OID nsVendor-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 8.243. nsViewConfiguration This attribute stores the view configuration used by Console. OID nsViewConfiguration-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.244. nsViewFilter This attribute sets the attribute-value pair which is used to identify entries belonging to the view. OID 2.16.840.1.113730.3.1.3023 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 8.245. nsWellKnownJarfiles OID nsWellKnownJarfiles-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.246. nswmExtendedUserPrefs This attribute is used to store user preferences for accounts in a messaging server. OID 2.16.840.1.113730.3.1.520 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.247. nsYIMid This attribute contains the Yahoo instant messaging user name for the user. OID 2.16.840.1.113730.3.1.2015 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 8.248. ntGroupAttributes This attribute points to a binary file which contains information about the group. 
For example: ntGroupAttributes:: IyEvYmluL2tzaAoKIwojIGRlZmF1bHQgdmFsdWUKIwpIPSJgaG9zdG5hb OID 2.16.840.1.113730.3.1.536 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.249. ntGroupCreateNewGroup The ntGroupCreateNewGroup attribute is used by Windows Sync to determine whether Directory Server should create new group entry when a new group is created on a Windows server. true creates the new entry; false ignores the Windows entry. OID 2.16.840.1.113730.3.1.45 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.250. ntGroupDeleteGroup The ntGroupDeleteGroup attribute is used by Windows Sync to determine whether Directory Server should delete a group entry when the group is deleted on a Windows sync peer server. true means the account is deleted; false ignores the deletion. OID 2.16.840.1.113730.3.1.46 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.251. ntGroupDomainId The ntGroupDomainID attribute contains the domain ID string for a group. ntGroupDomainId: DS HR Group OID 2.16.840.1.113730.3.1.44 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.252. ntGroupId The ntGroupId attribute points to a binary file which identifies the group. For example: ntGroupId: IOUnHNjjRgghghREgfvItrGHyuTYhjIOhTYtyHJuSDwOopKLhjGbnGFtr OID 2.16.840.1.113730.3.1.110 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.253. ntGroupType In Active Directory, there are two major types of groups: security and distribution. Security groups are most similar to groups in Directory Server, since security groups can have policies configured for access controls, resource restrictions, and other permissions. Distribution groups are for mailing distribution. These are further broken down into global and local groups. The Directory Server ntGroupType supports all four group types: The ntGroupType attribute identifies the type of Windows group. The valid values are as follows: -21483646 for global/security -21483644 for domain local/security 2 for global/distribution 4 for domain local/distribution This value is set automatically when the Windows groups are synchronized. To determine the type of group, you must manually configure it when the group gets created. By default, Directory Server groups do not have this attribute and are synchronized as global/security groups. ntGroupType: -21483646 OID 2.16.840.1.113730.3.1.47 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.254. ntUniqueId The ntUniqueId attribute contains a generated number used for internal server identification and operation. For example: ntUniqueId: 352562404224a44ab040df02e4ef500b OID 2.16.840.1.113730.3.1.111 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.255. ntUserAcctExpires This attribute indicates when the entry's Windows account will expire. This value is stored as a string in GMT format. For example: ntUserAcctExpires: 20081015203415 OID 2.16.840.1.113730.3.1.528 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.256. ntUserAuthFlags This attribute contains authorization flags set for the Windows account. OID 2.16.840.1.113730.3.1.60 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.257. 
ntUserBadPwCount This attribute sets the number of bad password failures that are allowed before an account is locked. OID 2.16.840.1.113730.3.1.531 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.258. ntUserCodePage The ntUserCodePage attribute contains the code page for the user's language of choice. For example: ntUserCodePage: AAAAAA== OID 2.16.840.1.113730.3.1.533 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.259. ntUserComment This attribute contains a text description or note about the user entry. OID 2.16.840.1.113730.3.1.522 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.260. ntUserCountryCode This attribute contains the two-character country code for the country where the user is located. OID 2.16.840.1.113730.3.1.532 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.261. ntUserCreateNewAccount The ntUserCreateNewAccount attribute is used by Windows Sync to determine whether Directory Server should create a new user entry when a new user is created on a Windows server. true creates the new entry; false ignores the Windows entry. OID 2.16.840.1.113730.3.1.42 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.262. ntUserDeleteAccount The ntUserDeleteAccount attribute is used by Windows Sync to determine whether a Directory Server entry will be automatically deleted when the user is deleted from the Windows sync peer server. true means the user entry is deleted; false ignores the deletion. OID 2.16.840.1.113730.3.1.43 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.263. ntUserDomainId The ntUserDomainId attribute contains the Windows domain login ID. For example: ntUserDomainId: jsmith OID 2.16.840.1.113730.3.1.41 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.264. ntUserFlags This attribute contains additional flags set for the Windows account. OID 2.16.840.1.113730.3.1.523 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.265. ntUserHomeDir The ntUserHomeDir attribute contains an ASCII string representing the Windows user's home directory. This attribute can be null. For example: ntUserHomeDir: c:\jsmith OID 2.16.840.1.113730.3.1.521 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.266. ntUserHomeDirDrive This attribute contains information about the drive on which the user's home directory is stored. OID 2.16.840.1.113730.3.1.535 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.267. ntUserLastLogoff The ntUserLastLogoff attribute contains the time of the last logoff. This value is stored as a string in GMT format. If security logging is turned on, then this attribute is updated on synchronization only if some other aspect of the user's entry has changed. ntUserLastLogoff: 20201015203415Z OID 2.16.840.1.113730.3.1.527 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.268. ntUserLastLogon The ntUserLastLogon attribute contains the time that the user last logged into the Windows domain. This value is stored as a string in GMT format.
If security logging is turned on, then this attribute is updated on synchronization only if some other aspect of the user's entry has changed. ntUserLastLogon: 20201015203415Z OID 2.16.840.1.113730.3.1.526 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.269. ntUserLogonHours The ntUserLogonHours attribute contains the time periods that a user is allowed to log onto the Active Directory domain. This attribute corresponds to the logonHours attribute in Active Directory. OID 2.16.840.1.113730.3.1.530 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.270. ntUserLogonServer The ntUserLogonServer attribute defines the Active Directory server to which the user's logon request is forwarded. OID 2.16.840.1.113730.3.1.65 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.271. ntUserMaxStorage The ntUserMaxStorage attribute contains the maximum amount of disk space available for the user. ntUserMaxStorage: 4294967295 OID 2.16.840.1.113730.3.1.529 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.272. ntUserNumLogons This attribute shows the number of successful logons to the Active Directory domain for the user. OID 2.16.840.1.113730.3.1.64 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.273. ntUserParms The ntUserParms attribute contains a Unicode string reserved for use by applications. OID 2.16.840.1.113730.3.1.62 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.274. ntUserPasswordExpired This attribute shows whether the password for the Active Directory account has expired. OID 2.16.840.1.113730.3.1.68 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.275. ntUserPrimaryGroupId The ntUserPrimaryGroupId attribute contains the group ID of the primary group to which the user belongs. OID 2.16.840.1.113730.3.1.534 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.276. ntUserPriv This attribute shows the type of privileges allowed for the user. OID 2.16.840.1.113730.3.1.59 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.277. ntUserProfile The ntUserProfile attribute contains the path to a user's profile. For example: ntUserProfile: c:\jsmith\profile.txt OID 2.16.840.1.113730.3.1.67 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.278. ntUserScriptPath The ntUserScriptPath attribute contains the path to an ASCII script used by the user to log into the domain. ntUserScriptPath: c:\jstorm\lscript.bat OID 2.16.840.1.113730.3.1.524 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.279. ntUserUniqueId The ntUserUniqueId attribute contains a unique numeric ID for the Windows user. OID 2.16.840.1.113730.3.1.66 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.280. ntUserUnitsPerWeek The ntUserUnitsPerWeek attribute contains the total amount of time that the user has spent logged into the Active Directory domain. OID 2.16.840.1.113730.3.1.63 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.281. ntUserUsrComment The ntUserUsrComment attribute contains additional comments about the user. 
OID 2.16.840.1.113730.3.1.61 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.282. ntUserWorkstations The ntUserWorkstations attribute contains a list of names, in ASCII strings, of work stations which the user is allowed to log in to. There can be up to eight work stations listed, separated by commas. Specify null to permit users to log on from any workstation. For example: ntUserWorkstations: firefly OID 2.16.840.1.113730.3.1.525 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization 8.283. o The organizationName , or o , attribute contains the organization name. For example: organizationName: Example Corporation o: Example Corporation OID 2.5.4.10 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.284. objectClass The objectClass attribute identifies the object classes used for an entry. For example: objectClass: person OID 2.5.4.0 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.285. objectClasses This attribute is used in a schema file to identify an object class allowed by the subschema definition. OID 2.5.21.6 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 8.286. obsoletedByDocument The obsoletedByDocument attribute contains the distinguished name of a document which obsoletes the current document entry. OID 0.9.2342.19200300.102.1.4 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.287. obsoletesDocument The obsoletesDocument attribute contains the distinguished name of a documented which is obsoleted by the current document entry. OID 0.9.2342.19200300.102.1.3 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.288. oncRpcNumber The oncRpcNumber attribute contains part of the RPC map and stores the RPC number for UNIX RPCs. Note The oncRpcNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.18 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.289. organizationalStatus The organizationalStatus identifies the person's category within an organization. organizationalStatus: researcher OID 0.9.2342.19200300.100.1.45 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.290. otherMailbox The otherMailbox attribute contains values for email types other than X.400 and RFC 822. otherMailbox: internet USD [email protected] OID 0.9.2342.19200300.100.1.22 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.291. ou The organizationalUnitName , or ou , contains the name of an organizational division or a subtree within the directory hierarchy. organizationalUnitName: Marketing ou: Marketing OID 2.5.4.11 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.292. owner The owner attribute contains the DN of the person responsible for an entry. For example: owner: cn=John Smith,ou=people,dc=example,dc=com OID 2.5.4.32 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.293. pager The pagerTelephoneNumber , or pager , attribute contains a person's pager phone number. 
pagerTelephoneNumber: 415-555-6789 pager: 415-555-6789 OID 0.9.2342.19200300.100.1.42 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.294. parentOrganization The parentOrganization attribute identifies the parent organization of an organization or organizational unit. OID 1.3.6.1.4.1.1466.101.120.41 Syntax DN Multi- or Single-Valued Single-valued Defined in Netscape 8.295. personalSignature The personalSignature attribute contains the entry's signature file, in binary format. personalSignature:: AAAAAA== OID 0.9.2342.19200300.100.1.53 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.296. personalTitle The personalTitle attribute contains a person's honorific, such as Ms. , Dr. , Prof. , and Rev. personalTitle: Mr. OID 0.9.2342.19200300.100.1.40 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.297. photo The photo attribute contains a photo file, in a binary format. photo:: AAAAAA== OID 0.9.2342.19200300.100.1.7 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.298. physicalDeliveryOfficeName The physicalDeliveryOffice contains the city or town in which a physical postal delivery office is located. physicalDeliveryOfficeName: Raleigh OID 2.5.4.19 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.299. postalAddress The postalAddress attribute identifies the entry's mailing address. This field is intended to include multiple lines. When represented in LDIF format, each line should be separated by a dollar sign (USD). To represent an actual dollar sign (USD) or backslash (\) within the entry text, use the escaped hex values \24 and \5c respectively. For example, to represent the string: The dollar (USD) value can be found in the c:\cost file. provide the string: The dollar (\24) value can be foundUSDin the c:\5ccost file. OID 2.5.4.16 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.300. postalCode The postalCode contains the zip code for an entry located within the United States. postalCode: 44224 OID 2.5.4.17 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.301. postOfficeBox The postOfficeBox attribute contains the postal address number or post office box number for an entry's physical mailing address. postOfficeBox: 1234 OID 2.5.4.18 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.302. preferredDeliveryMethod The preferredDeliveryMethod contains an entry's preferred contact or delivery method. For example: preferredDeliveryMethod: telephone OID 2.5.4.28 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.303. preferredLanguage The preferredLanguage attribute contains a person's preferred written or spoken language. The value should conform to the syntax for HTTP Accept-Language header values. OID 2.16.840.1.113730.3.1.39 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2798 8.304. preferredLocale A locale identifies language-specific information about how users of a specific region, culture, or custom expect data to be presented, including how data of a given language is interpreted and how data is to be sorted. Directory Server supports three locales for American English, Japanese, and German. The preferredLocale attribute sets which locale is preferred by a user. OID 1.3.6.1.4.1.1466.101.120.42 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape 8.305. 
preferredTimeZone The preferredTimeZone attribute sets the time zone to use for the user entry. OID 1.3.6.1.4.1.1466.101.120.43 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape 8.306. presentationAddress The presentationAddress attribute contains the OSI presentation address for an entry. This attribute includes the OSI Network Address and up to three selectors, one each for use by the transport, session, and presentation entities. For example: presentationAddress: TELEX+00726322+RFC-1006+02+130.59.2.1 OID 2.5.4.29 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2256 8.307. protocolInformation The protocolInformation attribute, used together with the presentationAddress attribute, provides additional information about the OSI network service. OID 2.5.4.48 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.308. pwdReset When an administrator changes the password of a user, Directory Server sets the pwdReset operational attribute in the user's entry to true . Applications can use this attribute to identify if a password of a user has been reset by an administrator. Note The pwdReset attribute is an operational attribute and, therefore, users cannot edit it. OID 1.3.6.1.4.1.1466.115.121.1.7 Syntax Boolean Multi- or Single-Valued Single-valued Defined in RFC draft-behera-ldap-password-policy 8.309. ref The ref attribute is used to support LDAPv3 smart referrals. The value of this attribute is an LDAP URL: ldap:// host_name : port_number / subtree_dn The port number is optional. For example: ref: ldap://server.example.com:389/ou=People,dc=example,dc=com OID 2.16.840.1.113730.3.1.34 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in LDAPv3 Referrals Internet Draft 8.310. registeredAddress This attribute contains a postal address for receiving telegrams or expedited documents. The recipient's signature is usually required on delivery. OID 2.5.4.26 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.311. roleOccupant This attribute contains the distinguished name of the person acting in the role defined in the organizationalRole entry. roleOccupant: uid=bjensen,dc=example,dc=com OID 2.5.4.33 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.312. roomNumber This attribute specifies the room number of an object. The cn attribute should be used for naming room objects. roomNumber: 230 OID 0.9.2342.19200300.100.1.6 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.313. searchGuide The searchGuide attribute specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search operation. When constructing search filters, use the enhancedSearchGuide attribute instead. OID 2.5.4.14 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.314. secretary The secretary attribute identifies an entry's secretary or administrative assistant. secretary: cn=John Smith,dc=example,dc=com OID 0.9.2342.19200300.100.1.21 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.315. seeAlso The seeAlso attribute identifies another Directory Server entry that may contain information related to this entry. seeAlso: cn=Quality Control Inspectors,ou=manufacturing,dc=example,dc=com OID 2.5.4.34 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.316.
serialNumber The serialNumber attribute contains the serial number of a device. serialNumber: 555-1234-AZ OID 2.5.4.5 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.317. serverHostName The serverHostName attribute contains the host name of the server on which Directory Server is running. OID 2.16.840.1.113730.3.1.76 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Red Hat Administration Services 8.318. serverProductName The serverProductName attribute contains the name of the server product. OID 2.16.840.1.113730.3.1.71 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Red Hat Administration Services 8.319. serverRoot This attribute is obsolete. This attribute shows the installation directory (server root) of Directory Servers version 7.1 or older. OID 2.16.840.1.113730.3.1.70 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 8.320. serverVersionNumber The serverVersionNumber attribute contains the server version number. OID 2.16.840.1.113730.3.1.72 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Red Hat Administration Services 8.321. shadowExpire The shadowExpire attribute contains the date that the shadow account expires. The format of the date is the number of days since the epoch (January 1, 1970), in UTC. To calculate this on the system, run a command like the following, using -d for the current date and -u to specify UTC: USD echo USD(date -u -d 20100108 +%s)/24/60/60 |bc 14617 The result (14617 in the example) is then the value of shadowExpire . shadowExpire: 14617 Note The shadowExpire attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.10 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.322. shadowFlag The shadowFlag attribute identifies what area in the shadow map stores the flag values. shadowFlag: 150 Note The shadowFlag attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.11 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.323. shadowInactive The shadowInactive attribute sets how long, in days, the shadow account can be inactive. shadowInactive: 15 Note The shadowInactive attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.9 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.324. shadowLastChange The shadowLastChange attribute contains the number of days between January 1, 1970 and the day when the user password was last set. For example, if an account's password was last set on Nov 4, 2016, the shadowLastChange attribute is set to 17109. The following exceptions apply: When the passwordMustChange parameter is enabled in the cn=config entry, new accounts have 0 set in the shadowLastChange attribute. When you create an account without a password, the shadowLastChange attribute is not added.
The shadowLastChange attribute is automatically updated for accounts synchronized from Active Directory. Note The shadowLastChange attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.5 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.325. shadowMax The shadowMax attribute sets the maximum number of days that a shadow password is valid. shadowMax: 10 Note The shadowMax attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.7 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.326. shadowMin The shadowMin attribute sets the minimum number of days that must pass between changing the shadow password. shadowMin: 3 Note The shadowMin attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.6 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.327. shadowWarning The shadowWarning attribute sets how many days in advance of password expiration to send a warning to the user. shadowWarning: 2 Note The shadowWarning attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.8 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.328. singleLevelQuality The singleLevelQuality specifies the purported data quality at the level immediately below in the directory tree. OID 0.9.2342.19200300.100.1.50 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 8.329. sn The surname , or sn , attribute contains an entry's surname , also called a last name or family name. surname: Jensen sn: Jensen OID 2.5.4.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.330. st The stateOrProvinceName , or st , attribute contains the entry's state or province. stateOrProvinceName: California st: California OID 2.5.4.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.331. street The streetAddress , or street , attribute contains an entry's street name and residential address. streetAddress: 1234 Ridgeway Drive street: 1234 Ridgeway Drive OID 2.5.4.9 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.332. subject The subject attribute contains information about the subject matter of the document entry. subject: employee option grants OID 0.9.2342.19200300.102.1.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.333. subtreeMaximumQuality The subtreeMaximumQuality attribute specifies the purported maximum data quality for a directory subtree. OID 0.9.2342.19200300.100.1.52 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 8.334.
subtreeMinimumQuality The subtreeMinimumQuality specifies the purported minimum data quality for a directory subtree. OID 0.9.2342.19200300.100.1.51 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 8.335. supportedAlgorithms The supportedAlgorithms attribute contains algorithms which are requested and stored in a binary form, such as supportedAlgorithms;binary . supportedAlgorithms:: AAAAAA== OID 2.5.4.52 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.336. supportedApplicationContext This attribute contains the identifiers of OSI application contexts. OID 2.5.4.30 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.337. telephoneNumber The telephoneNumber contains an entry's phone number. For example: telephoneNumber: 415-555-2233 OID 2.5.4.20 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.338. teletexTerminalIdentifier The teletexTerminalIdentifier attribute contains an entry's teletex terminal identifier. The first printable string in the example is the encoding of the first portion of the teletex terminal identifier to be encoded, and the subsequent 0 or more octet strings are subsequent portions of the teletex terminal identifier: teletex-id = ttx-term 0*("USD" ttx-param) ttx-term = printablestring ttx-param = ttx-key ":" ttx-value ttx-key = "graphic" / "control" / "misc" / "page" / "private" ttx-value = octetstring OID 2.5.4.22 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.339. telexNumber This attribute defines the telex number of the entry. The format of the telex number is as follows: actual-number "USD" country "USD" answerback actual-number is the syntactic representation of the number portion of the telex number being encoded. country is the TELEX country code. answerback is the answerback code of a TELEX terminal. OID 2.5.4.21 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.340. title The title attribute contains a person's title within the organization. title: Senior QC Inspector OID 2.5.4.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.341. ttl The TimeToLive , or ttl , attribute contains the time, in seconds, that cached information about an entry should be considered valid. Once the specified time has elapsed, the information is considered out of date. A value of zero ( 0 ) indicates that the entry should not be cached. TimeToLive: 120 ttl: 120 OID 1.3.6.1.4.250.1.60 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in LDAP Caching Internet Draft 8.342. uid The userID , more commonly uid , attribute contains the entry's unique user name. userID: jsmith uid: jsmith OID 0.9.2342.19200300.100.1.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.343. uidNumber The uidNumber attribute contains a unique numeric identifier for a user entry. This is analogous to the user number in Unix. uidNumber: 120 Note The uidNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.0 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 8.344. 
uniqueIdentifier This attribute identifies a specific item used to distinguish between two entries when a distinguished name has been reused. This attribute is intended to detect any instance of a reference to a distinguished name that has been deleted. This attribute is assigned by the server. uniqueIdentifier:: AAAAAA== OID 0.9.2342.19200300.100.1.44 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.345. uniqueMember The uniqueMember attribute identifies a group of names associated with an entry where each name was given a uniqueIdentifier to ensure its uniqueness. A value for the uniqueMember attribute is a DN followed by the uniqueIdentifier . OID 2.5.4.50 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.346. updatedByDocument The updatedByDocument attribute contains the distinguished name of a document that is an updated version of the document entry. OID 0.9.2342.19200300.102.1.6 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.347. updatesDocument The updatesDocument attribute contains the distinguished name of a document for which this document is an updated version. OID 0.9.2342.19200300.102.1.5 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 8.348. userCertificate This attribute is stored and requested in the binary form, as userCertificate;binary . userCertificate;binary:: AAAAAA== OID 2.5.4.36 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.349. userClass This attribute specifies a category of computer user. The semantics of this attribute are arbitrary. The organizationalStatus attribute makes no distinction between computer users and other types of users and may be more applicable. userClass: intern OID 0.9.2342.19200300.100.1.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 8.350. userPassword This attribute identifies the entry's password and encryption method in the format {encryption method}encrypted password . For example: userPassword: {sha}FTSLQhxXpA05 Transferring cleartext passwords is strongly discouraged where the underlying transport service cannot guarantee confidentiality. Transferring in cleartext may result in disclosure of the password to unauthorized parties. OID 2.5.4.35 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.351. userPKCS12 This attribute provides a format for the exchange of personal identity information. The attribute is stored and requested in binary form, as userPKCS12;binary . The attribute values are PFX PDUs stored as binary data. OID 2.16.840.1.113730.3.1.216 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.352. userSMIMECertificate The userSMIMECertificate attribute contains certificates which can be used by mail clients for S/MIME. This attribute requests and stores data in a binary format. For example: userSMIMECertificate;binary:: AAAAAA== OID 2.16.840.1.113730.3.1.40 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2798 8.353. vacationEndDate This attribute shows the ending date of the user's vacation period. OID 2.16.840.1.113730.3.1.708 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.354. vacationStartDate This attribute shows the start date of the user's vacation period. OID 2.16.840.1.113730.3.1.707 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 8.355.
x121Address The x121Address attribute contains a user's X.121 address. OID 2.5.4.24 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256 8.356. x500UniqueIdentifier Reserved for future use. An X.500 identifier is a binary method of identification useful for differentiating objects when a distinguished name has been reused. x500UniqueIdentifier:: AAAAAA== OID 2.5.4.45 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256
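As a brief illustration of how the attributes in this reference appear on real entries, the following ldapsearch sketch retrieves a user entry together with a few of the attributes described above. The connection details ( server.example.com , port 389, the cn=Directory Manager bind DN, the dc=example,dc=com suffix, and the uid=jsmith entry) are placeholder assumptions for illustration only, not values defined by this reference; substitute the values for your own deployment.
USD ldapsearch -H ldap://server.example.com:389 -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(uid=jsmith)" uid uidNumber sn title telephoneNumber
The output resembles the following LDIF, with the values here echoing the per-attribute examples shown earlier in this chapter:
dn: uid=jsmith,ou=People,dc=example,dc=com
uid: jsmith
uidNumber: 120
sn: Smith
title: Senior QC Inspector
telephoneNumber: 415-555-2233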
[ "aliasedObjectName: uid=jdoe,ou=people,dc=example,dc=com", "associatedDomain:US", "associatedName: c=us", "audio:: AAAAAA==", "authorCn: John Smith", "authorityrevocationlist;binary:: AAAAAA==", "authorSn: Smith", "buildingName: 14", "businessCategory: Engineering", "cACertificate;binary:: AAAAAA==", "countryName: GB c: US", "carLicense: 6ABC246", "certificateRevocationList;binary:: AAAAAA==", "commonName: John Smith cn: Bill Anderson", "cn: replicater.example.com:17430/dc%3Dexample%2Cdc%3com", "friendlyCountryName: Ireland co: Ireland", "crossCertificatePair;binary:: AAAAAA==", "dc: example domainComponent: example", "departmentNumber: 2604", "description: Quality control inspector for the ME2873 product line.", "destinationIndicator: Stow, Ohio, USA", "displayName: John Smith", "dITRedirect: cn=jsmith,dc=example,dc=com", "dn: uid=Barbara Jensen,ou=Quality Control,dc=example,dc=com", "dNSRecord: IN NS ns.uu.net", "documentAuthor: uid=Barbara Jensen,ou=People,dc=example,dc=com", "documentIdentifier: L3204REV1", "documentLocation: Department Library", "documentPublisher: Southeastern Publishing", "documentTitle: Installing Red Hat Directory Server", "documentVersion: 1.1", "favouriteDrink: iced tea drink: cranberry juice", "dSAQuality: high", "employeeNumber: 3441", "employeeType: Full time", "enhancedSearchGuide: (uid=bjensen)", "facsimileTelephoneNumber: +1 415 555 1212 fax: +1 415 555 1212", "gecos: John Smith", "generationQualifier:III", "gidNumber: 100", "givenName: Rachel", "homeDirectory: /home/jsmith", "homePhone: 415-555-1234", "homePostalAddress: 1234 Ridgeway DriveUSDSanta Clara, CAUSD99555", "The dollar (USD) value can be found in the c:\\cost file.", "The dollar (\\24) value can be foundUSDin the c:\\c5cost file.", "host: labcontroller01", "houseIdentifier: B105", "info: not valid", "initials: BAJ", "jpegPhoto:: AAAAAA==", "keyWords: directory LDAP X.500", "labeledURI: http://home.example.com labeledURI: http://home.example.com Example website", "localityName: Santa Clara l: Santa Clara", "loginShell: c:\\scripts\\jsmith.bat", "mail: [email protected]", "mailAlternateAddress: [email protected] mailAlternateAddress: [email protected]", "mailHost: mail.example.com", "mailPreferenceOption: 0", "manager: cn=Bill Andersen,ou=Quality Control,dc=example,dc=com", "member: cn=John Smith,dc=example,dc=com", "memberCertificateDescription: {ou=x,ou=A,dc=company,dc=example}", "memberUID: jsmith", "memberURL: ldap://cn=jsmith,ou=people,dc=example,dc=com", "mepMappedAttr: gidNumber: USDgidNumber", "mepMappedAttr: cn: Managed Group for USDcn", "mepStaticAttr: posixGroup", "mobileTelephoneNumber: 415-555-4321", "nsHardwarePlatform:i686", "nsLicensedFor: slapd", "nsServerID: slapd-example", "ntGroupAttributes:: IyEvYmluL2tzaAoKIwojIGRlZmF1bHQgdmFsdWUKIwpIPSJgaG9zdG5hb", "ntGroupDomainId: DS HR Group", "ntGroupId: IOUnHNjjRgghghREgfvItrGHyuTYhjIOhTYtyHJuSDwOopKLhjGbnGFtr", "ntGroupType: -21483646", "ntUniqueId: 352562404224a44ab040df02e4ef500b", "ntUserAcctExpires: 20081015203415", "ntUserCodePage: AAAAAA==", "ntUserDomainId: jsmith", "ntUserHomeDir: c:\\jsmith", "ntUserLastLogoff: 20201015203415Z", "ntUserLastLogon: 20201015203415Z", "ntUserMaxStorage: 4294967295", "ntUserProfile: c:\\jsmith\\profile.txt", "ntUserScriptPath: c:\\jstorm\\lscript.bat", "ntUserWorkstations: firefly", "organizationName: Example Corporation o: Example Corporation", "objectClass: person", "organizationalStatus: researcher", "otherMailbox: internet USD [email protected]", "organizationalUnitName: Marketing ou: 
Marketing", "owner: cn=John Smith,ou=people,dc=example,dc=com", "pagerTelephoneNumber: 415-555-6789 pager: 415-555-6789", "personalSignature:: AAAAAA==", "personalTitle: Mr.", "photo:: AAAAAA==", "physicalDeliveryOfficeName: Raleigh", "The dollar (USD) value can be found in the c:\\cost file.", "The dollar (\\24) value can be foundUSDin the c:\\5ccost file.", "postalCode: 44224", "postOfficeBox: 1234", "preferredDeliveryMethod: telephone", "presentationAddress: TELEX+00726322+RFC-1006+02+130.59.2.1", "ldap: pass:quotes[ host_name ]:pass:quotes[ port_number ]/pass:quotes[ subtree_dn ]", "ref: ldap://server.example.com:389/ou=People,dc=example,dc=com", "roleOccupant: uid=bjensen,dc=example,dc=com", "roomNumber: 230", "secretary: cn=John Smith,dc=example,dc=com", "seeAlso: cn=Quality Control Inspectors,ou=manufacturing,dc=example,dc=com", "serialNumber: 555-1234-AZ", "echo date -u -d 20100108 +%s /24/60/60 |bc 14617", "shadowExpire: 14617", "shadowFlag: 150", "shadowInactive: 15", "shadowMax: 10", "shadowMin: 3", "shadowWarning: 2", "surname: Jensen sn: Jensen", "stateOrProvinceName: California st: California", "streetAddress: 1234 Ridgeway Drive street: 1234 Ridgeway Drive", "subject: employee option grants", "supportedAlgorithms:: AAAAAA==", "telephoneNumber: 415-555-2233", "teletex-id = ttx-term 0*(\"USD\" ttx-param) ttx-term = printablestring ttx-param = ttx-key \":\" ttx-value ttx-key = \"graphic\" / \"control\" / \"misc\" / \"page\" / \"private\" ttx-value = octetstring", "actual-number \"USD\" country \"USD\" answerback", "title: Senior QC Inspector", "TimeToLive: 120 ttl: 120", "userID: jsmith uid: jsmith", "uidNumber: 120", "uniqueIdentifier:: AAAAAA==", "userCertificate;binary:: AAAAAA==", "userClass: intern", "userPassword: {sha}FTSLQhxXpA05", "userSMIMECertificate;binary:: AAAAAA==", "x500UniqueIdentifier:: AAAAAA==" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuration_and_schema_reference/assembly_entry-attribute-reference_config-schema-reference-title
Chapter 2. Installation
Chapter 2. Installation This chapter guides you through the steps to install Red Hat build of Apache Qpid Proton DotNet in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To use Red Hat build of Apache Qpid Proton DotNet on Red Hat Enterprise Linux, you must install the .NET 6.0 developer tools. For information, see the Getting started with .NET on RHEL 9 and Getting started with .NET on RHEL 8 . To build programs using Red Hat build of Apache Qpid Proton DotNet on Microsoft Windows, you must install Visual Studio. 2.2. Installing on Red Hat Enterprise Linux Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at https://access.redhat.com/downloads Locate the Red Hat AMQ Clients entry in the INTEGRATION AND AUTOMATION category. Click Red Hat AMQ Clients . The Software Downloads page opens. Download the amq-qpid-dotnet-1.0.0.M9 .zip file. Use the unzip command to extract the file contents into a directory of your choosing. USD unzip amq-qpid-dotnet-1.0.0.M9.zip When you extract the contents of the .zip file, a directory named amq-dotnet-1.0.0-M9 is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document. Use a text editor to create the file USDHOME/.nuget/NuGet/NuGet.Config and add the following content: <?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3"/> <add key="amq-clients" value=" <install-dir> /lib"/> </packageSources> </configuration> If you already have a NuGet.Config file, add the amq-clients line to it. Alternatively, you can move the .nupkg file inside the nupkg directory to an existing package source location. 2.3. Installing on Microsoft Windows Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at https://access.redhat.com/downloads Locate the Red Hat AMQ Clients entry in the INTEGRATION AND AUTOMATION category. Click Red Hat AMQ Clients . The Software Downloads page opens. Download the amq-qpid-proton-dotnet-1.0.0-M9 .zip file. Extract the file contents into a directory of your choosing by right-clicking on the zip file and selecting Extract All . When you extract the contents of the .zip file, a directory named amq-dotnet-1.0.0-M9 is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document. 2.4. Adding the client to your .NET application Using the dotnet CLI, you can add a reference to the Red Hat build of Apache Qpid Proton DotNet client to your application, which also downloads release binaries from the NuGet gallery. The following command should be run (with the appropriate version updated) in the location where your project file is saved. dotnet add package Apache.Qpid.Proton.Client --version 1.0.0-M9 Following this command, your csproj file should be updated to contain a reference to the proton-dotnet client library and should look similar to the following example: <ItemGroup> <PackageReference Include="Apache.Qpid.Proton.Client" Version="1.0.0-M9" /> </ItemGroup> Users can manually add this reference as well and use the dotnet restore command to fetch the artifacts from the NuGet gallery. 2.5. 
Installing the examples Use the git clone command to clone the source repository to a local directory named qpid-proton-dotnet : USD git clone https://github.com/apache/qpid-proton-dotnet.git Change to the qpid-proton-dotnet directory and use the git checkout command to check out the commit associated with this release: USD cd qpid-proton-dotnet USD git checkout 1.0.0-M9 The resulting local directory is referred to as <source-dir> in this guide.
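To confirm that the checkout is usable, you can build and run one of the bundled examples with the dotnet CLI. The example directory shown below ( Examples/HelloWorld ) is only an assumption for illustration; use whichever example project actually exists under <source-dir> in your checkout, and make sure a broker that the example can connect to is reachable from your environment.
USD cd <source-dir>/Examples/HelloWorld
USD dotnet run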
[ "unzip amq-qpid-dotnet-1.0.0.M9.zip", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <configuration> <packageSources> <add key=\"nuget.org\" value=\"https://api.nuget.org/v3/index.json\" protocolVersion=\"3\"/> <add key=\"amq-clients\" value=\" <install-dir> /lib\"/> </packageSources> </configuration>", "dotnet add package Apache.Qpid.Proton.Client --version 1.0.0-M9", "<ItemGroup> <PackageReference Include=\"Apache.Qpid.Proton.Client\" Version=\"1.0.0-M9\" /> </ItemGroup>", "git clone https://github.com/apache/qpid-proton-dotnet.git", "cd qpid-proton-dotnet git checkout 1.0.0-M9" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/installation
Chapter 4. Configuring user workload monitoring
Chapter 4. Configuring user workload monitoring 4.1. Preparing to configure the user workload monitoring stack This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 4.1.1. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the user-workload-monitoring-config config map. Table 4.1. Configurable monitoring components for user-defined projects Component user-workload-monitoring-config config map key Prometheus Operator prometheusOperator Prometheus prometheus Alertmanager alertmanager Thanos Ruler thanosRuler Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. 4.1.2. Enabling monitoring for user-defined projects In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 4.1.2.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important You must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. 
You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It might sometimes take a while for these components to redeploy. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Note If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default. Verify that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources User workload monitoring first steps 4.1.2.2. Granting users permission to configure monitoring for user-defined projects As a cluster administrator, you can assign the user-workload-monitoring-config-edit role to a user. This grants permission to configure and manage monitoring for user-defined projects without giving them permission to configure and manage core OpenShift Container Platform monitoring components. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring Verify that the user is correctly assigned to the user-workload-monitoring-config-edit role by displaying the related role binding: USD oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring Example command USD oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring Example output Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1 1 In this example, user1 is assigned to the user-workload-monitoring-config-edit role. 4.1.3. Enabling alert routing for user-defined projects In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects. This process consists of the following steps: Enable alert routing for user-defined projects: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. 
Grant users permission to configure alert routing for user-defined projects. After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects. Additional resources Understanding alert routing for user-defined projects 4.1.3.1. Enabling the platform Alertmanager instance for user-defined alert routing You can allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserAlertmanagerConfig: true in the alertmanagerMain section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # ... alertmanagerMain: enableUserAlertmanagerConfig: true 1 # ... 1 Set the enableUserAlertmanagerConfig value to true to allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Save the file to apply the changes. The new configuration is applied automatically. 4.1.3.2. Enabling a separate Alertmanager instance for user-defined alert routing In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2 1 Set the enabled value to true to enable a dedicated instance of the Alertmanager for user-defined projects in a cluster. Set the value to false or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value to false or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance. 2 Set the enableAlertmanagerConfig value to true to enable users to define their own alert routing configurations with AlertmanagerConfig objects. Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically. Verification Verify that the user-workload Alertmanager instance has started: # oc -n openshift-user-workload-monitoring get alertmanager Example output NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s 4.1.3.3. Granting users permission to configure alert routing for user-defined projects You can grant users permission to configure alert routing for user-defined projects. 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the alert-routing-edit cluster role to a user in the user-defined project: USD oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1 1 For <namespace> , substitute the namespace for the user-defined project, such as ns1 . For <user> , substitute the username for the account to which you want to assign the role. Additional resources Configuring alert notifications 4.1.4. Granting users permissions for monitoring for user-defined projects As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions: Monitoring user-defined projects Configuring the components that monitor user-defined projects Configuring alert routing for user-defined projects Managing alerts and silences for user-defined projects You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Table 4.2. Monitoring roles Role name Description Project user-workload-monitoring-config-edit Users with this role can edit the user-workload-monitoring-config ConfigMap object to configure Prometheus, Prometheus Operator, Alertmanager, and Thanos Ruler for user-defined workload monitoring. openshift-user-workload-monitoring monitoring-alertmanager-api-reader Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring monitoring-alertmanager-api-writer Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring Table 4.3. Monitoring cluster roles Cluster role name Description Project monitoring-rules-view Users with this cluster role have read access to PrometheusRule custom resources (CRs) for user-defined projects. They can also view the alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-rules-edit Users with this cluster role can create, modify, and delete PrometheusRule CRs for user-defined projects. They can also manage alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-edit Users with this cluster role have the same privileges as users with the monitoring-rules-edit cluster role. Additionally, users can create, read, modify, and delete ServiceMonitor and PodMonitor resources to scrape metrics from services and pods. Can be bound with RoleBinding to any user project. alert-routing-edit Users with this cluster role can create, update, and delete AlertmanagerConfig CRs for user-defined projects. Can be bound with RoleBinding to any user project. Additional resources CMO services resources Granting users permission to configure monitoring for user-defined projects Granting users permission to configure alert routing for user-defined projects 4.1.4.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. 
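The web console procedure below ultimately creates a RoleBinding object in the selected project. As a rough sketch of the result, binding the monitoring-rules-view cluster role to a user might look like the following; the user1 user, the ns1 namespace, and the binding name are illustrative assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-rules-view-user1
  namespace: ns1
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: monitoring-rules-view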
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 4.1.4.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the wanted monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the wanted monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 4.1.5. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label. 4.1.6. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload: to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically. 
Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object. 4.2. Configuring performance and scalability for user workload monitoring You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. 4.2.1. Controlling the placement and distribution of monitoring components You can move the monitoring stack components to specific nodes: Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes. Assign tolerations to enable moving components to tainted nodes. By doing so, you control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. Additional resources Using node selectors to move monitoring components 4.2.1.1. Moving monitoring components to different nodes You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. Warning It is not permitted to move components to control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node_name> <node_label> 1 1 Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the wanted label. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # ... <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 # ... 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node_label_1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. 
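For example, the following sketch pins the Thanos Ruler pods to nodes that carry a monitoring: "true" label; the label key and value are illustrative assumptions rather than defaults.
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      nodeSelector:
        monitoring: "true"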
Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. Additional resources Enabling monitoring for user-defined projects Understanding how to update labels on nodes Placing pods on specific nodes using node selectors nodeSelector (Kubernetes documentation) 4.2.1.2. Assigning tolerations to monitoring components You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects Controlling pod placement using node taints Taints and Tolerations (Kubernetes documentation) 4.2.2. Managing CPU and memory resources for monitoring components You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. You can configure these limits and requests for monitoring components that monitor user-defined projects in the openshift-user-workload-monitoring namespace. 4.2.2.1. Specifying limits and requests To configure CPU and memory resources, specify values for resource limits and requests in the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add values to define resource limits and requests for each component you want to configure. 
Important Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run. Example of setting resource limits and requests apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About specifying limits and requests for monitoring components Kubernetes requests and limits documentation (Kubernetes documentation) 4.2.3. Controlling the impact of unbound metrics attributes in user-defined projects Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Additional resources Controlling the impact of unbound metrics attributes in user-defined projects Enabling monitoring for user-defined projects Determining why Prometheus is consuming a lot of disk space 4.2.3.1. Setting scrape sample and label limits for user-defined projects You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values. Warning If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 1 A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000. 
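Before enforcing a limit, it can help to gauge how many samples your targets currently produce. As a sketch, you can run a query such as the following in the Observe Metrics page of the web console; scrape_samples_post_metric_relabeling is a per-scrape metric that Prometheus records automatically, and ns1 is an assumed project name: topk(10, scrape_samples_post_metric_relabeling{namespace="ns1"}) This lists the ten targets in the project that ingest the most samples per scrape.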
Add the enforcedLabelLimit , enforcedLabelNameLengthLimit , and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3 1 Specifies the maximum number of labels per scrape. The default value is 0 , which specifies no limit. 2 Specifies the maximum length in characters of a label name. The default value is 0 , which specifies no limit. 3 Specifies the maximum length in characters of a label value. The default value is 0 , which specifies no limit. Save the file to apply the changes. The limits are applied automatically. 4.2.3.2. Creating scrape sample alerts You can create alerts that notify you when: The target cannot be scraped or is not available for the specified for duration A scrape sample threshold is reached or is exceeded for the specified for duration Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit . You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf "%.4g" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: (scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)) > 0.9 9 for: 10m 10 labels: severity: warning 11 1 Defines the name of the alerting rule. 2 Specifies the user-defined project where the alerting rule is deployed. 3 The TargetDown alert fires if the target cannot be scraped and is not available for the for duration. 4 The message that is displayed when the TargetDown alert fires. 5 The conditions for the TargetDown alert must be true for this duration before the alert is fired. 6 Defines the severity for the TargetDown alert. 7 The ApproachingEnforcedSamplesLimit alert fires when the defined scrape sample threshold is exceeded and lasts for the specified for duration. 8 The message that is displayed when the ApproachingEnforcedSamplesLimit alert fires. 9 The threshold for the ApproachingEnforcedSamplesLimit alert. 
In this example, the alert fires when the number of ingested samples exceeds 90% of the configured limit. 10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired. 11 Defines the severity for the ApproachingEnforcedSamplesLimit alert. Apply the configuration to the user-defined project: USD oc apply -f monitoring-stack-alerts.yaml Additionally, you can check if a target has hit the configured limit: In the Administrator perspective of the web console, go to Observe Targets and select an endpoint with a Down status that you want to check. The Scrape failed: sample limit exceeded message is displayed if the endpoint failed because of an exceeded sample limit. 4.2.4. Configuring pod topology spread constraints You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the following settings under the data/config.yaml field to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option> 1 Specify a name of the component for which you want to set up pod topology spread constraints. 2 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. 3 Specify a key of node labels for topologyKey . Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain. 4 Specify a value for whenUnsatisfiable . Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 5 Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. 
Example configuration for Thanos Ruler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About pod topology spread constraints for monitoring Controlling pod placement by using pod topology spread constraints Pod Topology Spread Constraints (Kubernetes documentation) 4.3. Storing and recording data for user workload monitoring Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 4.3.1. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 4.3.1.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 4.3.1.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 
3 Specify the amount of required storage. The following example configures a PVC that claims persistent storage for Thanos Ruler: Example PVC configuration apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Understanding persistent storage PersistentVolumeClaims (Kubernetes documentation) 4.3.1.3. Resizing a persistent volume You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have configured at least one PVC for components that monitor user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the previous value. The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: Example storage configuration for thanosRuler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 4.3.2.
Modifying retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for 24 hours for monitoring for user-defined projects. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: Example of setting retention time for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.3.2.1. Modifying the retention time for Thanos Ruler metrics data By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1 1 Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . The default is 24h . The following example sets the retention time to 10 days for Thanos Ruler data: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Retention time and size for Prometheus metrics Enabling monitoring for user-defined projects Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage 4.3.3. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler. The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap object: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. Available component values are prometheus , alertmanager , prometheusOperator , and thanosRuler . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. 
The following example lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 4.3.4. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the queryLogFile parameter for Prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 Add the full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Example output ... prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m ... Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources Enabling monitoring for user-defined projects 4.4. Configuring metrics for user workload monitoring Configure the collection of metrics to monitor how cluster components and your own workloads are performing. You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. Additional resources Understanding metrics 4.4.1. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. 
Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-user-workload-monitoring namespace. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheus , as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1 1 Add configuration for metrics that you want to send to the remote endpoint. Example of forwarding a single metric called my_metric apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep Example of forwarding metrics called my_metric_1 and my_metric_2 in my_namespace namespace apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep Save the file to apply the changes. The new configuration is applied automatically. 4.4.1.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. 
Basic authentication basicAuth Basic authentication sets the authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 4.4.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. 4.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-user-workload-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. 4.4.1.2.2. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password. The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint. 
apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. 4.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret Object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. 4.4.1.2.4. Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The Oauth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that ClientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. 4.4.1.2.5. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-user-workload-monitoring namespace. 
apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 4.4.1.3. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxbackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. 9 The samples that are older than the sampleAgeLimit limit are dropped from the queue. If the value is undefined or set to 0s , the parameter is ignored. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 4.4.2. Creating cluster ID labels for metrics You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace . 
This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have configured remote write storage. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config In the writeRelabelConfigs: section under data/config.yaml/prometheus/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Adding cluster ID labels to metrics Obtaining your cluster ID 4.4.3. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 4.4.3.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. 
In this example, it is called prometheus-example-app.yaml . Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 4.4.3.2. Specifying how a service is monitored To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role. You have enabled monitoring for user-defined projects. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a new YAML configuration file named example-app-service-monitor.yaml . Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app 1 Specify a user-defined namespace where your service runs. 2 Specify endpoint ports to be scraped by Prometheus. 3 Configure a selector to match your service based on its metadata labels. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. Verify that the ServiceMonitor resource is running: USD oc -n <namespace> get servicemonitor Example output NAME AGE prometheus-example-monitor 81m 4.4.3.3. 
Example service endpoint authentication settings You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs). The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. 4.4.3.3.1. Sample YAML authentication with a bearer token The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace: Example bearer token secret apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1 1 Specify an authentication token. The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth : Example bearer token authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the authentication token in the specified Secret object. 2 The name of the Secret object that contains the authentication credentials. Important Do not use bearerTokenFile to configure bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected. 4.4.3.3.2. Sample YAML for Basic authentication The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace: Example Basic authentication secret apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2 1 Specify a username for authentication. 2 Specify a password for authentication. The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth : Example Basic authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the username in the specified Secret object. 2 4 The name of the Secret object that contains the Basic authentication. 3 The key that contains the password in the specified Secret object. 4.4.3.3.3. Sample YAML authentication with OAuth 2.0 The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace: Example OAuth 2.0 secret apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 1 Specify an Oauth 2.0 ID. 2 Specify an Oauth 2.0 secret. The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. 
The example uses a Secret object named example-oauth2 : Example OAuth 2.0 authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the OAuth 2.0 ID in the specified Secret object. 2 4 The name of the Secret object that contains the OAuth 2.0 credentials. 3 The key that contains the OAuth 2.0 secret in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . Additional resources Enabling monitoring for user-defined projects Scrape Prometheus metrics using TLS in ServiceMonitor configuration (Red Hat Customer Portal article) PodMonitor API ServiceMonitor API 4.5. Configuring alerts and notifications for user workload monitoring You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information. 4.5.1. Configuring external Alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for user-defined projects. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/<component> : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2 2 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). 1 Substitute <component> for one of two supported external Alertmanager components: prometheus or thanosRuler . 
The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.5.2. Configuring secrets for Alertmanager The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver. For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object. 4.5.2.1. Adding a secret to the Alertmanager configuration You can add secrets to the Alertmanager configuration by editing the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a secrets: section under data/config.yaml/alertmanager with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. 
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token Save the file to apply the changes. The new configuration is applied automatically. 4.5.3. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects 4.5.4. Configuring alert notifications In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects with one of the following methods: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. Developers and other users with the alert-routing-edit cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers. Note Review the following limitations of alert routing for user-defined projects: User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace. When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration. 
Additional resources Understanding alert routing for user-defined projects Sending notifications to external systems PagerDuty (PagerDuty official site) Prometheus Integration Guide (PagerDuty official site) Support version matrix for monitoring components Enabling alert routing for user-defined projects 4.5.4.1. Configuring alert routing for user-defined projects If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects. Prerequisites A cluster administrator has enabled monitoring for user-defined projects. A cluster administrator has enabled alert routing for user-defined projects. You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml . Add an AlertmanagerConfig YAML definition to the file. For example: apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post Save the file. Apply the resource to the cluster: USD oc apply -f example-app-alert-routing.yaml The configuration is automatically applied to the Alertmanager pods. 4.5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled a separate instance of Alertmanager for user-defined alert routing. You have installed the OpenShift CLI ( oc ). Procedure Print the currently active Alertmanager configuration into the file alertmanager.yaml : USD oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : route: receiver: Default group_by: - name: Default routes: - matchers: - "service = prometheus-example-monitor" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3 1 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label. 2 Specify the name of the receiver to use for the alerts group. 3 Specify the receiver configuration. Apply the new configuration in the file: USD oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=- 4.5.4.3. 
Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
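The following sketch is an illustration added for clarity rather than part of the product documentation: it shows one way such a split might be expressed in the alertmanager.yaml routing configuration described earlier, assuming Alertmanager's first-match routing behavior. The receiver names platform-team and app-teams and their receiver configurations are hypothetical placeholders:

route:
  receiver: platform-team
  routes:
    - matchers:
        - "openshift_io_alert_source = platform"    # default platform alerts
      receiver: platform-team
    - matchers:
        - "openshift_io_alert_source != platform"   # user-defined alerts, including alerts without the label
      receiver: app-teams
receivers:
  - name: platform-team
    <platform_receiver_configuration>
  - name: app-teams
    <app_receiver_configuration>

Because routes are evaluated in order and the first matching route is used, platform alerts stay with the platform team, while everything else, including alerts that do not carry the openshift_io_alert_source label at all, is delivered to the application teams' receiver.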
[ "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc -n openshift-user-workload-monitoring get pod", "NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring", "oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring", "oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring", "Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # alertmanagerMain: enableUserAlertmanagerConfig: true 1 #", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2", "oc -n openshift-user-workload-monitoring get alertmanager", "NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s", "oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1", "oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1", "oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1", "oc label namespace my-project 'openshift.io/user-monitoring=false'", "oc label namespace my-project 'openshift.io/user-monitoring-'", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false", "oc -n openshift-user-workload-monitoring get pod", "No resources found in openshift-user-workload-monitoring project.", "oc label nodes <node_name> <node_label> 1", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: 
name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: (scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)) > 0.9 9 for: 10m 10 labels: severity: warning 11", "oc apply -f monitoring-stack-alerts.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 
kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1", "oc -n openshift-user-workload-monitoring get pods", "prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - 
sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep", "apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7", "apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4", "apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3", "apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>", "apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap 
metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3", "apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP", "oc apply -f prometheus-example-app.yaml", "oc -n ns1 get pod", "NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app", "oc apply -f example-app-service-monitor.yaml", "oc -n <namespace> get servicemonitor", "NAME AGE prometheus-example-monitor 81m", "apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app", "apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app", "apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: 
user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod", "apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post", "oc apply -f example-app-alert-routing.yaml", "oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3", "oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring/configuring-user-workload-monitoring
6.4.3. Noop
6.4.3. Noop The Noop I/O scheduler implements a simple first-in first-out (FIFO) scheduling algorithm. Merging of requests happens at the generic block layer, but it is a simple last-hit cache. If a system is CPU-bound and the storage is fast, this can be the best I/O scheduler to use. Following are the tunables available for the block layer. /sys/block/sdX/queue tunables add_random In some cases, the overhead of I/O events contributing to the entropy pool for /dev/random is measurable. In such cases, it may be desirable to set this value to 0. max_sectors_kb By default, the maximum request size sent to disk is 512 KB. This tunable can be used to either raise or lower that value. The minimum value is limited by the logical block size; the maximum value is limited by max_hw_sectors_kb . There are some SSDs which perform worse when I/O sizes exceed the internal erase block size. In such cases, it is recommended to tune max_hw_sectors_kb down to the erase block size. You can test for this using an I/O generator such as iozone or aio-stress , varying the record size from, for example, 512 bytes to 1 MB. nomerges This tunable is primarily a debugging aid. Most workloads benefit from request merging (even on faster storage such as SSDs). In some cases, however, it is desirable to disable merging, such as when you want to see how many IOPS a storage back-end can process without disabling read-ahead or performing random I/O. nr_requests Each request queue has a limit on the total number of request descriptors that can be allocated for each of read and write I/Os. By default, the number is 128 , meaning 128 reads and 128 writes can be queued at a time before putting a process to sleep. The process put to sleep is the next to try to allocate a request, not necessarily the process that has allocated all of the available requests. If you have a latency-sensitive application, then you should consider lowering the value of nr_requests in your request queue and limiting the command queue depth on the storage to a low number (even as low as 1 ), so that writeback I/O cannot allocate all of the available request descriptors and fill up the device queue with write I/O. Once nr_requests have been allocated, all other processes attempting to perform I/O will be put to sleep to wait for requests to become available. This makes things more fair, as the requests are then distributed in a round-robin fashion (instead of letting one process consume them all in rapid succession). Note that this is only a problem when using the deadline or noop schedulers, as the default CFQ configuration protects against this situation. optimal_io_size In some circumstances, the underlying storage will report an optimal I/O size. This is most common in hardware and software RAID, where the optimal I/O size is the stripe size. If this value is reported, applications should issue I/O aligned to and in multiples of the optimal I/O size whenever possible. read_ahead_kb The operating system can detect when an application is reading data sequentially from a file or from disk. In such cases, it performs an intelligent read-ahead algorithm, whereby more data than is requested by the user is read from disk. Thus, when the user attempts to read a block of data, it will already be in the operating system's page cache. The potential downside to this is that the operating system can read more data from disk than necessary, which occupies space in the page cache until it is evicted because of high memory pressure.
Having multiple processes doing false read-ahead would increase memory pressure in this circumstance. For device mapper devices, it is often a good idea to increase the value of read_ahead_kb to a large number, such as 8192 . The reason is that a device mapper device is often made up of multiple underlying devices. Setting this value to the default ( 128 KB) multiplied by the number of devices you are mapping is a good starting point for tuning. rotational Traditional hard disks have been rotational (made up of spinning platters). SSDs, however, are not. Most SSDs will advertise this properly. If, however, you come across a device that does not advertise this flag properly, it may be necessary to set rotational to 0 manually; when rotational is disabled, the I/O elevator does not use logic that is meant to reduce seeks, since there is little penalty for seek operations on non-rotational media. rq_affinity I/O completions can be processed on a different CPU from the one that issued the I/O. Setting rq_affinity to 1 causes the kernel to deliver completions to the CPU on which the I/O was issued. This can improve CPU data caching effectiveness.
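To make the preceding tunables more concrete, here is a brief, hedged example of how they might be inspected and changed at runtime from a root shell; sda is a placeholder device name, the values are illustrative only, and settings applied this way do not survive a reboot:

# Show the available schedulers; the active one is displayed in brackets
cat /sys/block/sda/queue/scheduler

# Select the noop I/O scheduler for this device
echo noop > /sys/block/sda/queue/scheduler

# Stop this device's I/O from feeding the entropy pool
echo 0 > /sys/block/sda/queue/add_random

# Cap the request size, for example to match an SSD erase block size
echo 512 > /sys/block/sda/queue/max_sectors_kb

# Shrink the request queue for a latency-sensitive workload
echo 32 > /sys/block/sda/queue/nr_requests

# Raise read-ahead (in KB), as suggested above for device mapper devices
echo 8192 > /sys/block/sda/queue/read_ahead_kb

# Mark the device as non-rotational and complete I/O on the issuing CPU
echo 0 > /sys/block/sda/queue/rotational
echo 1 > /sys/block/sda/queue/rq_affinity

To persist such values across reboots, they are typically reapplied from a boot-time script or a udev rule.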
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/ch06s04s03
Chapter 6. Creating an instance with a guaranteed minimum bandwidth QoS
Chapter 6. Creating an instance with a guaranteed minimum bandwidth QoS You can create instances that request a guaranteed minimum bandwidth by using a Quality of Service (QoS) policy. QoS policies with a guaranteed minimum bandwidth rule are assigned to ports on a specific physical network. When you create an instance that uses the configured port, the Compute scheduling service selects a host for the instance that satisfies this request. The Compute scheduling service checks the Placement service for the amount of bandwidth reserved by other instances on each physical interface, before selecting a host to deploy an instance on. Limitations/Restrictions You can only assign a guaranteed minimum bandwidth QoS policy when creating a new instance. You cannot assign a guaranteed minimum bandwidth QoS policy to instances that are already running, as the Compute service only updates resource usage for an instance in placement during creation or move operations, which means the minimum bandwidth available to the instance cannot be guaranteed. Prerequisites A QoS policy is available that has a minimum bandwidth rule. For more information, see Configuring Quality of Service (QoS) policies in the Configuring Red Hat OpenStack Platform networking guide. Procedure List the available QoS policies: Check the rules of each of the available policies to determine which has the required minimum bandwidth: Create a port from the appropriate policy: Create an instance, specifying the NIC port to use: An "ACTIVE" status in the output indicates that you have successfully created the instance on a host that can provide the requested guaranteed minimum bandwidth. 6.1. Removing a guaranteed minimum bandwidth QoS from an instance If you want to lift the guaranteed minimum bandwidth QoS policy restriction from an instance, you can detach the interface. Procedure To detach the interface, enter the following command:
[ "(overcloud)USD openstack network qos policy list", "-------------------------------------- --------- -------- ---------+ | ID | Name | Shared | Default | Project | -------------------------------------- --------- -------- ---------+ | 6d771447-3cf4-4ef1-b613-945e990fa59f | policy2 | True | False | ba4de51bf7694228a350dd22b7a3dc24 | | 78a24462-e3c1-4e66-a042-71131a7daed5 | policy1 | True | False | ba4de51bf7694228a350dd22b7a3dc24 | | b80acc64-4fc2-41f2-a346-520d7cfe0e2b | policy0 | True | False | ba4de51bf7694228a350dd22b7a3dc24 | -------------------------------------- --------- -------- ---------+", "(overcloud)USD openstack network qos policy show policy0", "------------- ---------------------------------------------------------------------------------------+ | Field | Value | ------------- ---------------------------------------------------------------------------------------+ | description | | | id | b80acc64-4fc2-41f2-a346-520d7cfe0e2b | | is_default | False | | location | cloud= ', project.domain_id=, project.domain_name='Default , project.id= ba4de51bf7694228a350dd22b7a3dc24 , project.name= admin , region_name= regionOne , zone= | | name | policy0 | | project_id | ba4de51bf7694228a350dd22b7a3dc24 | | rules | [{ min_kbps : 100000, direction : egress , id : d46218fe-9218-4e96-952b-9f45a5cb3b3c , qos_policy_id : b80acc64-4fc2-41f2-a346-520d7cfe0e2b , type : minimum_bandwidth }, { min_kbps : 100000, direction : ingress , id : 1202c4e3-a03a-464c-80d5-0bf90bb74c9d , qos_policy_id : b80acc64-4fc2-41f2-a346-520d7cfe0e2b , type : minimum_bandwidth }] | | shared | True | | tags | [] | ------------- ---------------------------------------------------------------------------------------+", "(overcloud)USD openstack port create port-normal-qos --network net0 --qos-policy policy0", "openstack server create --flavor cirros256 --image cirros-0.3.5-x86_64-disk --nic port-id=port-normal-qos --wait qos_instance", "openstack server remove port <vm_name|vm_id> <port_name|port_id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/proc_creating-an-instance-with-a-guaranteed-min-bw-qos_osp
Chapter 4. Using AMQ Management Console
Chapter 4. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 4.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox and Chrome. For more information on supported browser versions, see AMQ 7 Supported Configurations . 4.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web path="web"> <binding uri="http://localhost:8161"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </binding> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web path="web"> <binding uri="http://0.0.0.0:8161"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker_instance_dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. 
In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker_instance_dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the artemis.profile file. Additional resources To learn how to access the console, see Section 4.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 4.4.3, "Securing network access to AMQ Management Console" . 4.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 4.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. Figure 4.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. Specify the port value that is defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia .
To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: 4.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 4.4.1. Securing AMQ Management Console using Red Hat Single Sign-On Prerequisites Red Hat Single Sign-On 7.4 Procedure Configure Red Hat Single Sign-On: Navigate to the realm in Red Hat Single Sign-On that you want to use for securing AMQ Management Console. Each realm in Red Hat Single Sign-On includes a client named Broker . This client is not related to AMQ. Create a new client in Red Hat Single Sign-On, for example artemis-console . Navigate to the client settings page and set: Valid Redirect URIs to the AMQ Management Console URL followed by * , for example: Web Origins to the same value as Valid Redirect URIs . Red Hat Single Sign-On allows you to enter + , indicating that the allowed CORS origins include the value for Valid Redirect URIs . Create a role for the client, for example guest . Make sure all users who require access to AMQ Management Console are assigned the above role, for example, using Red Hat Single Sign-On groups. Configure the AMQ Broker instance: Add the following to your <broker-instance-dir> /instances/broker0/etc/login.config file to configure AMQ Management Console to use Red Hat Single Sign-On: Adding this configuration sets up a JAAS principal and a requirement for a bearer token from Red Hat Single Sign-On. The connection to Red Hat Single Sign-On is defined in the keycloak-bearer-token.json file, as described in the next step. Create a file <broker-instance-dir> /etc/keycloak-bearer-token.json with the following contents to specify the connection to Red Hat Single Sign-On used for the bearer token exchange: { "realm": " <realm-name> ", "resource": " <client-name> ", "auth-server-url": " <RHSSO-URL> /auth", "principal-attribute": "preferred_username", "use-resource-role-mappings": true, "ssl-required": "external", "confidential-port": 0 } <realm-name> the name of the realm in Red Hat Single Sign-On <client-name> the name of the client in Red Hat Single Sign-On <RHSSO-URL> the URL of Red Hat Single Sign-On Create a file <broker-instance-dir> /etc/keycloak-js-token.json with the following contents to specify the Red Hat Single Sign-On authentication endpoint: { "realm": "<realm-name>", "clientId": "<client-name>", "url": " <RHSSO-URL> /auth" } Configure the security settings by editing the <broker-instance-dir> /etc/broker.xml file. 
For example, to allow users with the amq role to consume messages and allow users with the guest role to send messages, add the following: <security-setting match="Info"> <permission roles="amq" type="createDurableQueue"/> <permission roles="amq" type="deleteDurableQueue"/> <permission roles="amq" type="createNonDurableQueue"/> <permission roles="amq" type="deleteNonDurableQueue"/> <permission roles="guest" type="send"/> <permission roles="amq" type="consume"/> </security-setting> Run the AMQ Broker instance and validate the AMQ Management Console configuration. 4.4.2. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 4.4.3. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker_instance_dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web path="web"> <binding uri="https://0.0.0.0:8161" keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> </binding> </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath=" <broker_instance_dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 4.4.4. Configuring AMQ Management Console to use certificate-based authentication You can configure AMQ Management Console to authenticate users by using certificates instead of passwords. Procedure Obtain certificates for the broker and clients from a trusted certificate authority or generate self-signed certificates. 
If you want to generate self-signed certificates, complete the following steps: Generate a self-signed certificate for the broker. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the certificate from the broker keystore, so that it can be shared with clients. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt On the client, import the broker certificate into the client truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt On the client, generate a self-signed certificate for the client. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the client certificate from the client keystore to a file so that it can be added to the broker truststore. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt Import the client certificate into the broker truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt Note On the broker machine, ensure that the keystore and truststore files are in a location that is accessible to the broker. In the <broker_instance_dir>/etc/bootstrap.xml file, update the web configuration to enable the HTTPS protocol and client authentication for the broker console. For example: ... <web path="web"> <binding uri="https://localhost:8161" keyStorePath="USD{artemis.instance}/etc/server-keystore.p12" keyStorePassword="password" clientAuth="true" trustStorePath="USD{artemis.instance}/etc/client-truststore.p12" trustStorePassword="password"> ... </binding> </web> ... binding uri Specify the https protocol to enable SSL and add a host name and port. keystorePath The path to the keystore where the broker certificate is installed. keystorePassword The password of the keystore where the broker certificate is installed. ClientAuth Set to true to configure the broker to require that each client presents a certificate when a client tries to connect to the broker console. trustStorePath If clients are using self-signed certificates, specify the path to the truststore where client certificates are installed. trustStorePassword If clients are using self-signed certificates, specify the password of the truststore where client certificates are installed . NOTE. You need to configure the trustStorePath and trustStorePassword properties only if clients are using self-signed certificates. Obtain the Subject Distinguished Names (DNs) from each client certificate so you can create a mapping between each client certificate and a broker user. Export each client certificate from the client's keystore file into a temporary file. For example: Print the contents of the exported certificate: The output is similar to that shown below: The Owner entry is the Subject DN. The format used to enter the Subject DN depends on your platform. 
The string above could also be represented as: Enable certificate-based authentication for the broker's console. Open the <broker_instance_dir> /etc/login.config configuration file. Add the certificate login module and reference the user and roles properties files. For example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user="artemis-users.properties" org.apache.activemq.jaas.textfiledn.role="artemis-roles.properties"; }; org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule The implementation class. org.apache.activemq.jaas.textfiledn.user Specifies the location of the user properties file relative to the directory that contains the login configuration file. org.apache.activemq.jaas.textfiledn.role Specifies the properties file that maps users to defined roles for the login module implementation. Note If you change the default name of the certificate login module configuration in the <broker_instance_dir> /etc/login.config file, you must update the value of the -Dhawtio.realm argument in the <broker_instance_dir>/etc/artemis.profile file to match the new name. The default name is activemq . Open the <broker_instance_dir>/etc/artemis-users.properties file. Create a mapping between client certificates and broker users by adding the Subject DNs that you obtained from each client certificate to a broker user. For example: user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US In this example, the user1 broker user is mapped to the client certificate that has a Subject Distinguished Name of CN=user1,O=Progress,C=US . After you create a mapping between a client certificate and a broker user, the broker can authenticate the user by using the certificate. Open the <broker_instance_dir>/etc/artemis-roles.properties file. Grant users permission to log in to the console by adding them to the role that is specified for the HAWTIO_ROLE variable in the <broker_instance_dir>/etc/artemis.profile file. The default value of the HAWTIO_ROLE variable is amq . For example: amq=user1, user2 Configure the following recommended security properties for the HTTPS protocol. Open the <broker_instance_dir>/etc/artemis.profile file. Set the hawtio.http.strictTransportSecurity property to allow only HTTPS requests to the AMQ Management Console and to convert any HTTP requests to HTTPS. For example: hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload Set the hawtio.http.publicKeyPins property to instruct the web browser to associate a specific cryptographic public key with the AMQ Management Console to decrease the risk of "man-in-the-middle" attacks using forged certificates. For example: hawtio.http.publicKeyPins = pin-sha256="..."; max-age=5184000; includeSubDomains 4.5. Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 4.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default. In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. 
The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 4.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 4.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 4.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 4.4. Broker diagram tab, including attributes 4.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 4.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 4.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 4.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. 
Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 4.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 4.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 4.7. Send Message page If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 4.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 4.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 4.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. Durable If you select this option, the queue and its messages will be persistent. Filter An optional filter expression for the queue. Only messages that match the filter expression are routed to the queue. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue . 4.5.4.4. Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. 
The console displays a chart that shows real-time data for all of the queue attributes. Figure 4.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 4.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 4.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 4.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab. A page appears for you to compose the message. Figure 4.11. Send Message page for a queue If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 4.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box next to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 4.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box next to each message that you want to move. In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move . 4.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge . Do one of the following: To... Do this... Delete a message from the queue Click the check box next to each message that you want to delete. Click the Delete button. 
Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button.
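The TLS and client-authentication settings described in Section 4.4.3 and Section 4.4.4 can be checked from the command line before users attempt to log in. The following is a hedged sketch rather than a documented procedure: it assumes the self-signed files and the securepass password from the earlier keytool examples, and a console bound to localhost:8161. The openssl options shown are standard OpenSSL usage.

# Convert the client PKCS12 keystore to PEM so that openssl s_client can present it
# (file names and password follow the earlier keytool examples).
openssl pkcs12 -in client-keystore.p12 -passin pass:securepass -nodes -out client.pem

# Open a TLS connection to the console port, presenting the client certificate and
# trusting the exported broker certificate. "Verify return code: 0 (ok)" indicates the
# handshake succeeded; a handshake failure usually means a certificate is missing from
# one of the stores configured in bootstrap.xml.
openssl s_client -connect localhost:8161 -cert client.pem -key client.pem -CAfile broker.crt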
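The management data shown in the console is also reachable over the console's Jolokia endpoint, which can be convenient for quick scripted checks of a broker attribute. This is an illustrative sketch, not part of the product procedures: the credentials, host, broker name, and attribute are placeholders. Use a user that holds the HAWTIO_ROLE role, substitute the <name> value from your broker.xml for the broker name, and make sure the Origin header matches one of the allow-origin entries in jolokia-access.xml.

# Read a single attribute from the broker MBean through the console's Jolokia endpoint.
curl -s -u admin:admin \
  -H 'Origin: http://localhost' \
  -H 'Content-Type: application/json' \
  -d '{"type":"read","mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","attribute":"Version"}' \
  http://localhost:8161/console/jolokia/

# The response is a JSON document; the "value" field contains the attribute value.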
[ "<web path=\"web\"> <binding uri=\"http://localhost:8161\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </binding> </web>", "<web path=\"web\"> <binding uri=\"http://0.0.0.0:8161\">", "<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>", "-Dhawtio.disableProxy=false", "-Dhawtio.proxyWhitelist=192.168.0.51", "http://192.168.0.49/console/jolokia", "https://broker.example.com:8161/console/*", "console { org.keycloak.adapters.jaas.BearerTokenLoginModule required keycloak-config-file=\"USD{artemis.instance}/etc/keycloak-bearer-token.json\" role-principal-class=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal ; };", "{ \"realm\": \" <realm-name> \", \"resource\": \" <client-name> \", \"auth-server-url\": \" <RHSSO-URL> /auth\", \"principal-attribute\": \"preferred_username\", \"use-resource-role-mappings\": true, \"ssl-required\": \"external\", \"confidential-port\": 0 }", "{ \"realm\": \"<realm-name>\", \"clientId\": \"<client-name>\", \"url\": \" <RHSSO-URL> /auth\" }", "<security-setting match=\"Info\"> <permission roles=\"amq\" type=\"createDurableQueue\"/> <permission roles=\"amq\" type=\"deleteDurableQueue\"/> <permission roles=\"amq\" type=\"createNonDurableQueue\"/> <permission roles=\"amq\" type=\"deleteNonDurableQueue\"/> <permission roles=\"guest\" type=\"send\"/> <permission roles=\"amq\" type=\"consume\"/> </security-setting>", "<web path=\"web\"> <binding uri=\"https://0.0.0.0:8161\" keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </binding> </web>", "keyStorePath=\" <broker_instance_dir> /etc/keystore.jks\"", "keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA", "keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt", "keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt", "keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA", "keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt", "keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt", "<web path=\"web\"> <binding uri=\"https://localhost:8161\" keyStorePath=\"USD{artemis.instance}/etc/server-keystore.p12\" keyStorePassword=\"password\" clientAuth=\"true\" trustStorePath=\"USD{artemis.instance}/etc/client-truststore.p12\" trustStorePassword=\"password\"> </binding> </web>", "keytool -export -file <file_name> -alias broker-localhost -keystore broker.ks -storepass <password>", "keytool -printcert 
-file <file_name>", "Owner: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Issuer: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Serial number: 51461f5d Valid from: Sun Apr 17 12:20:14 IST 2022 until: Sat Jul 16 12:20:14 IST 2022 Certificate fingerprints: SHA1: EC:94:13:16:04:93:57:4F:FD:CA:AD:D8:32:68:A4:13:CC:EA:7A:67 SHA256: 85:7F:D5:4A:69:80:3B:5B:86:27:99:A7:97:B8:E4:E8:7D:6F:D1:53:08:D8:7A:BA:A7:0A:7A:96:F3:6B:98:81", "Owner: `CN=localhost,\\ OU=broker,\\ O=Unknown,\\ L=Unknown,\\ ST=Unknown,\\ C=Unknown`", "activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user=\"artemis-users.properties\" org.apache.activemq.jaas.textfiledn.role=\"artemis-roles.properties\"; };", "user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US", "amq=user1, user2", "hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload", "hawtio.http.publicKeyPins = pin-sha256=\"...\"; max-age=5184000; includeSubDomains" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/managing_amq_broker/assembly-using-AMQ-console-managing
5.292. sanlock
5.292. sanlock 5.292.1. RHEA-2012:0996 - sanlock enhancement update Updated sanlock packages that add multiple enhancements are now available for Red Hat Enterprise Linux 6. The sanlock packages provide a shared disk lock manager that uses disk paxos to manage leases on shared storage. Hosts connected to a common Storage Area Network (SAN) can use sanlock to synchronize the access to the shared disks. Both libvirt and vdsm can use sanlock to synchronize access to shared virtual machine (VM) images. The sanlock packages have been upgraded to the latest upstream version, which provides a number of enhancements over the previous version. (BZ# 782600 ) All users of sanlock are advised to upgrade to these updated packages, which add these enhancements.
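As a quick way to inspect the lockspaces and resource leases that sanlock is currently managing on a host, the daemon can be queried in its client mode. This is general sanlock usage rather than something introduced by this erratum, and the exact output format can vary between versions.

# Print the lockspaces and resource leases held by the local sanlock daemon.
sanlock client status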
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/sanlock
Chapter 23. MVEL
Chapter 23. MVEL Overview MVEL is a Java-based dynamic language that is similar to OGNL, but is reported to be much faster. The MVEL support is in the camel-mvel module. Syntax You use the MVEL dot syntax to invoke Java methods, for example: Because MVEL is dynamically typed, it is unnecessary to cast the message body instance (of Object type) before invoking the getFamilyName() method. You can also use an abbreviated syntax for invoking bean attributes, for example: Adding the MVEL module To use MVEL in your routes you need to add a dependency on camel-mvel to your project as shown in Example 23.1, "Adding the camel-mvel dependency" . Example 23.1. Adding the camel-mvel dependency Built-in variables Table 23.1, "MVEL variables" lists the built-in variables that are accessible when using MVEL. Table 23.1. MVEL variables Name Type Description this org.apache.camel.Exchange The current Exchange exchange org.apache.camel.Exchange The current Exchange exception Throwable the Exchange exception (if any) exchangeID String the Exchange ID fault org.apache.camel.Message The Fault message(if any) request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties Map The Exchange properties property( name ) Object The value of the named Exchange property property( name , type ) Type The typed value of the named Exchange property Example Example 23.2, "Route using MVEL" shows a route that uses MVEL. Example 23.2. Route using MVEL
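Example 23.2 uses the XML DSL (the route itself appears in the listing below). For comparison, the same filter can also be written in the Java DSL. The following is a sketch that assumes camel-mvel is on the classpath and uses the mvel() expression builder of the Java DSL:

// Java DSL equivalent of Example 23.2: only exchanges whose 'foo' header equals 'bar'
// are forwarded to the seda:bar endpoint.
import org.apache.camel.builder.RouteBuilder;

public class MvelFilterRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("seda:foo")
            .filter().mvel("request.headers.foo == 'bar'")
                .to("seda:bar");
    }
}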
[ "getRequest().getBody().getFamilyName()", "request.body.familyName", "<!-- Maven POM File --> <properties> <camel-version>2.23.2.fuse-7_13_0-00013-redhat-00001</camel-version> </properties> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mvel</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "<camelContext> <route> <from uri=\"seda:foo\"/> <filter> <language langauge=\"mvel\">request.headers.foo == 'bar'</language> <to uri=\"seda:bar\"/> </filter> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/MVEL
Chapter 1. Administering and Maintaining the Red Hat Virtualization Environment
Chapter 1. Administering and Maintaining the Red Hat Virtualization Environment The Red Hat Virtualization environment requires an administrator to keep it running. As an administrator, your tasks include: Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools. Monitoring the overall system resources for potential problems such as extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load or freeing resources by shutting down machines). Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory). Managing customized object properties using tags. Managing searches saved as public bookmarks . Managing user setup and setting permission levels. Troubleshooting for specific users or virtual machines for overall system functionality. Generating general and specific reports. 1.1. Global Configuration Accessed by clicking Administration Configure , the Configure window allows you to configure a number of global resources for your Red Hat Virtualization environment, such as users, roles, system permissions, scheduling policies, instance types, and MAC address pools. This window allows you to customize the way in which users interact with resources in the environment, and provides a central location for configuring options that can be applied to multiple clusters. 1.1.1. Roles Roles are predefined sets of privileges that can be configured from Red Hat Virtualization Manager. Roles provide access and management permissions to different levels of resources in the data center, and to specific physical and virtual resources. With multilevel administration, any permissions which apply to a container object also apply to all individual objects within that container. For example, when a host administrator role is assigned to a user on a specific host, the user gains permissions to perform any of the available host operations, but only on the assigned host. However, if the host administrator role is assigned to a user on a data center, the user gains permissions to perform host operations on all hosts within the cluster of the data center. 1.1.1.1. Creating a New Role If the role you require is not on Red Hat Virtualization's default list of roles, you can create a new role and customize it to suit your purposes. Procedure Click Administration Configure . This opens the Configure window. The Roles tab is selected by default, showing a list of default User and Administrator roles, and any custom roles. Click New . Enter the Name and Description of the new role. Select either Admin or User as the Account Type . Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects in the Check Boxes to Allow Action list. You can also expand or collapse the options for each object. For each of the objects, select or clear the actions you want to permit or deny for the role you are setting up. Click OK to apply the changes. The new role displays on the list of roles. 1.1.1.2. Editing or Copying a Role You can change the settings for roles you have created, but you cannot change default roles. To change default roles, clone and modify them to suit your requirements. Procedure Click Administration Configure . 
This opens the Configure window, which shows a list of default User and Administrator roles, as well as any custom roles. Select the role you wish to change. Click Edit or Copy . This opens the Edit Role or Copy Role window. If necessary, edit the Name and Description of the role. Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object. For each of the objects, select or clear the actions you wish to permit or deny for the role you are editing. Click OK to apply the changes you have made. 1.1.1.3. User Role and Authorization Examples The following examples illustrate how to apply authorization controls for various scenarios, using the different features of the authorization system described in this chapter. Example 1.1. Cluster Permissions Sarah is the system administrator for the accounts department of a company. All the virtual resources for her department are organized under a Red Hat Virtualization cluster called Accounts . She is assigned the ClusterAdmin role on the accounts cluster. This enables her to manage all virtual machines in the cluster, since the virtual machines are child objects of the cluster. Managing the virtual machines includes editing, adding, or removing virtual resources such as disks, and taking snapshots. It does not allow her to manage any resources outside this cluster. Because ClusterAdmin is an administrator role, it allows her to use the Administration Portal or the VM Portal to manage these resources. Example 1.2. VM PowerUser Permissions John is a software developer in the accounts department. He uses virtual machines to build and test his software. Sarah has created a virtual desktop called johndesktop for him. John is assigned the UserVmManager role on the johndesktop virtual machine. This allows him to access this single virtual machine using the VM Portal. Because he has UserVmManager permissions, he can modify the virtual machine. Because UserVmManager is a user role, it does not allow him to use the Administration Portal. Example 1.3. Data Center Power User Role Permissions Penelope is an office manager. In addition to her own responsibilities, she occasionally helps the HR manager with recruitment tasks, such as scheduling interviews and following up on reference checks. As per corporate policy, Penelope needs to use a particular application for recruitment tasks. While Penelope has her own machine for office management tasks, she wants to create a separate virtual machine to run the recruitment application. She is assigned PowerUserRole permissions for the data center in which her new virtual machine will reside. This is because to create a new virtual machine, she needs to make changes to several components within the data center, including creating the virtual disk in the storage domain. Note that this is not the same as assigning DataCenterAdmin privileges to Penelope. As a PowerUser for a data center, Penelope can log in to the VM Portal and perform virtual machine-specific actions on virtual machines within the data center. She cannot perform data center-level operations such as attaching hosts or storage to a data center. Example 1.4. Network Administrator Permissions Chris works as the network administrator in the IT department. Her day-to-day responsibilities include creating, manipulating, and removing networks in the department's Red Hat Virtualization environment. 
For her role, she requires administrative privileges on the resources and on the networks of each resource. For example, if Chris has NetworkAdmin privileges on the IT department's data center, she can add and remove networks in the data center, and attach and detach networks for all virtual machines belonging to the data center. Example 1.5. Custom Role Permissions Rachel works in the IT department, and is responsible for managing user accounts in Red Hat Virtualization. She needs permission to add user accounts and assign them the appropriate roles and permissions. She does not use any virtual machines herself, and should not have access to administration of hosts, virtual machines, clusters or data centers. There is no built-in role which provides her with this specific set of permissions. A custom role must be created to define the set of permissions appropriate to Rachel's position. Figure 1.1. UserManager Custom Role The UserManager custom role shown above allows manipulation of users, permissions and roles. These actions are organized under System - the top level object of the hierarchy shown in Object Hierarchy . This means they apply to all other objects in the system. The role is set to have an Account Type of Admin . This means that when she is assigned this role, Rachel can use both the Administration Portal and the VM Portal. 1.1.2. System Permissions Permissions enable users to perform actions on objects, where objects are either individual objects or container objects. Any permissions that apply to a container object also apply to all members of that container. Figure 1.2. Permissions & Roles Figure 1.3. Red Hat Virtualization Object Hierarchy 1.1.2.1. User Properties Roles and permissions are the properties of the user. Roles are predefined sets of privileges that permit access to different levels of physical and virtual resources. Multilevel administration provides a finely grained hierarchy of permissions. For example, a data center administrator has permissions to manage all objects in the data center, while a host administrator has system administrator permissions to a single physical host. A user can have permissions to use a single virtual machine but not make any changes to the virtual machine configurations, while another user can be assigned system permissions to a virtual machine. 1.1.2.2. User and Administrator Roles Red Hat Virtualization provides a range of pre-configured roles, from an administrator with system-wide permissions to an end user with access to a single virtual machine. While you cannot change or remove the default roles, you can clone and customize them, or create new roles according to your requirements. There are two types of roles: Administrator Role: Allows access to the Administration Portal for managing physical and virtual resources. An administrator role confers permissions for actions to be performed in the VM Portal; however, it has no bearing on what a user can see in the VM Portal. User Role: Allows access to the VM Portal for managing and accessing virtual machines and templates. A user role determines what a user can see in the VM Portal. Permissions granted to a user with an administrator role are reflected in the actions available to that user in the VM Portal. 1.1.2.3. User Roles Explained The table below describes basic user roles which confer permissions to access and configure virtual machines in the VM Portal. Table 1.1. 
Red Hat Virtualization User Roles - Basic Role Privileges Notes UserRole Can access and use virtual machines and pools. Can log in to the VM Portal, use assigned virtual machines and pools, view virtual machine state and details. PowerUserRole Can create and manage virtual machines and templates. Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center. UserVmManager System administrator of a virtual machine. Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the VM Portal is automatically assigned the UserVmManager role on the machine. The table below describes advanced user roles which allow you to do more fine tuning of permissions for resources in the VM Portal. Table 1.2. Red Hat Virtualization User Roles - Advanced Role Privileges Notes UserTemplateBasedVm Limited privileges to only use Templates. Can use templates to create virtual machines. DiskOperator Virtual disk user. Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. VmCreator Can create virtual machines in the VM Portal. This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or clusters. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains. TemplateCreator Can create, edit, manage and remove virtual machine templates within assigned resources. This role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. DiskCreator Can create, edit, manage and remove virtual disks within assigned clusters or data centers. This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or storage domains. TemplateOwner Can edit and delete the template, assign and manage user permissions for the template. This role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template. VnicProfileUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks. 1.1.2.4. Administrator Roles Explained The table below describes basic administrator roles which confer permissions to access and configure resources in the Administration Portal. Table 1.3. Red Hat Virtualization System Administrator Roles - Basic Role Privileges Notes SuperUser System Administrator of the Red Hat Virtualization environment. Has full permissions across all objects and levels, can manage all objects across all data centers. ClusterAdmin Cluster Administrator. Possesses administrative permissions for all objects underneath a specific cluster. DataCenterAdmin Data Center Administrator. Possesses administrative permissions for all objects underneath a specific data center except for storage. 
Important Do not use the administrative user for the directory server as the Red Hat Virtualization administrative user. Create a user in the directory server specifically for use as the Red Hat Virtualization administrative user. The table below describes advanced administrator roles which allow you to do more fine tuning of permissions for resources in the Administration Portal. Table 1.4. Red Hat Virtualization System Administrator Roles - Advanced Role Privileges Notes TemplateAdmin Administrator of a virtual machine template. Can create, delete, and configure the storage domains and network details of templates, and move templates between domains. StorageAdmin Storage Administrator. Can create, delete, configure, and manage an assigned storage domain. HostAdmin Host Administrator. Can attach, remove, configure, and manage a specific host. NetworkAdmin Network Administrator. Can configure and manage the network of a particular data center or cluster. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. VmPoolAdmin System Administrator of a virtual pool. Can create, delete, and configure a virtual pool; assign and remove virtual pool users; and perform basic operations on a virtual machine in the pool. GlusterAdmin Gluster Storage Administrator. Can create, delete, configure, and manage Gluster storage volumes. VmImporterExporter Import and export Administrator of a virtual machine. Can import and export virtual machines. Able to view all virtual machines and templates exported by other users. 1.1.2.5. Assigning an Administrator or User Role to a Resource Assign administrator or user roles to resources to allow users to access or manage that resource. Procedure Find and click the resource's name. This opens the details view. Click the Permissions tab to list the assigned users, each user's role, and the inherited permissions for the selected resource. Click Add . Enter the name or user name of an existing user into the Search text box and click Go . Select a user from the resulting list of possible matches. Select a role from the Role to Assign drop-down list. Click OK . The user now has the inherited permissions of that role enabled for that resource. Important Avoid assigning global permissions to regular users on resources such as clusters because permissions are automatically inherited by resources that are lower in a system's hierarchy. Set UserRole and all other user role permissions on specific resources such as virtual machines, pools or virtual machine pools, especially the latter. Assigning global permissions can cause two problems due to the inheritance of permissions: A regular user can automatically be granted permission to control virtual machine pools, even if the administrator assigning permissions did not intend for this to happen. The virtual machine portal might behave unexpectedly with pools. Therefore, it is strongly recommended to set UserRole and all other user role permissions on specific resources only, especially virtual machine pool resources, and not on resources from which other resources inherit permissions. 1.1.2.6. Removing an Administrator or User Role from a Resource Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource. Procedure Find and click the resource's name. This opens the details view. 
Click the Permissions tab to list the assigned users, the user's role, and the inherited permissions for the selected resource. Select the user to remove from the resource. Click Remove . Click OK . 1.1.2.7. Managing System Permissions for a Data Center As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment. The data center administrator role permits the following actions: Create and remove clusters associated with the data center. Add and remove hosts, virtual machines, and pools associated with the data center. Edit user permissions for virtual machines associated with the data center. Note You can only assign roles and permissions to existing users. You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator. 1.1.2.8. Data Center Administrator Roles Explained Data Center Permission Roles The table below describes the administrator roles and privileges applicable to data center administration. Table 1.5. Red Hat Virtualization System Administrator Roles Role Privileges Notes DataCenterAdmin Data Center Administrator Can use, create, delete, manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates and virtual machines. NetworkAdmin Network Administrator Can configure and manage the network of a particular data center. A network administrator of a data center inherits network permissions for virtual machines within the data center as well. 1.1.2.9. Managing System Permissions for a Cluster As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in the cluster. 
Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment. The cluster administrator role permits the following actions: Create and remove associated clusters. Add and remove hosts, virtual machines, and pools associated with the cluster. Edit user permissions for virtual machines associated with the cluster. Note You can only assign roles and permissions to existing users. You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator. 1.1.2.10. Cluster Administrator Roles Explained Cluster Permission Roles The table below describes the administrator roles and privileges applicable to cluster administration. Table 1.6. Red Hat Virtualization System Administrator Roles Role Privileges Notes ClusterAdmin Cluster Administrator Can use, create, delete, manage all physical and virtual resources in a specific cluster, including hosts, templates and virtual machines. Can configure network properties within the cluster such as designating display networks, or marking a network as required or non-required. However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster, to do so NetworkAdmin permissions are required. NetworkAdmin Network Administrator Can configure and manage the network of a particular cluster. A network administrator of a cluster inherits network permissions for virtual machines within the cluster as well. 1.1.2.11. Managing System Permissions for a Network As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration roles, such as viewing and attaching networks on a specific virtual machine or template. You can use the Configure button in the header bar to assign a network administrator for all networks in the environment. The network administrator role permits the following actions: Create, edit and remove networks. Edit the configuration of the network, including configuring port mirroring. Attach and detach networks from resources including clusters and virtual machines. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator. 1.1.2.12. Network Administrator and User Roles Explained Network Permission Roles The table below describes the administrator and user roles and privileges applicable to network administration. Table 1.7. Red Hat Virtualization Network Administrator and User Roles Role Privileges Notes NetworkAdmin Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. 
Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine. VnicProfileUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks. 1.1.2.13. Managing System Permissions for a Host As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment. The host administrator role permits the following actions: Edit the configuration of the host. Set up the logical networks. Remove the host. You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator. 1.1.2.14. Host Administrator Roles Explained Host Permission Roles The table below describes the administrator roles and privileges applicable to host administration. Table 1.8. Red Hat Virtualization System Administrator Roles Role Privileges Notes HostAdmin Host Administrator Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host. 1.1.2.15. Managing System Permissions for a Storage Domain As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment. The storage domain administrator role permits the following actions: Edit the configuration of the storage domain. Move the storage domain into maintenance mode. Remove the storage domain. Note You can only assign roles and permissions to existing users. You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator. 1.1.2.16. 
Storage Administrator Roles Explained Storage Domain Permission Roles The table below describes the administrator roles and privileges applicable to storage domain administration. Table 1.9. Red Hat Virtualization System Administrator Roles Role Privileges Notes StorageAdmin Storage Administrator Can create, delete, configure and manage a specific storage domain. GlusterAdmin Gluster Storage Administrator Can create, delete, configure and manage Gluster storage volumes. 1.1.2.17. Managing System Permissions for a Virtual Machine Pool As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources. The virtual machine pool administrator role permits the following actions: Create, edit, and remove pools. Add and detach virtual machines from the pool. Note You can only assign roles and permissions to existing users. 1.1.2.18. Virtual Machine Pool Administrator Roles Explained Pool Permission Roles The table below describes the administrator roles and privileges applicable to pool administration. Table 1.10. Red Hat Virtualization System Administrator Roles Role Privileges Notes VmPoolAdmin System Administrator role of a virtual pool. Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine. ClusterAdmin Cluster Administrator Can use, create, delete, manage all virtual machine pools in a specific cluster. 1.1.2.19. Managing System Permissions for a Virtual Disk As the SuperUser , the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. Red Hat Virtualization Manager provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the VM Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources. The virtual disk creator role permits the following actions: Create, edit, and remove virtual disks associated with a virtual machine or other resources. Edit user permissions for virtual disks. Note You can only assign roles and permissions to existing users. 1.1.2.20. 
Virtual Disk User Roles Explained Virtual Disk User Permission Roles The table below describes the user roles and privileges applicable to using and administrating virtual disks in the VM Portal. Table 1.11. Red Hat Virtualization System Administrator Roles Role Privileges Notes DiskOperator Virtual disk user. Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. DiskCreator Can create, edit, manage and remove virtual disks within assigned clusters or data centers. This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. 1.1.2.20.1. Setting a Legacy SPICE Cipher SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is: kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine. You can change the cipher string by using an Ansible playbook. Changing the cipher string On the Manager machine, create a file in the directory /usr/share/ovirt-engine/playbooks . For example: # vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml Enter the following in the file and save it: name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption Run the file you just created: # ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy using the --extra-vars option with the variable host_deploy_spice_cipher_string : # ansible-playbook -l hostname \ --extra-vars host_deploy_spice_cipher_string="DEFAULT:-RC4:-3DES:-DES" \ /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml 1.1.3. Scheduling Policies A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed amongst hosts in the cluster that scheduling policy is applied to. Scheduling policies determine this logic via a combination of filters, weightings, and a load balancing policy. The filter modules apply hard enforcement and filter out hosts that do not meet the conditions specified by that filter. The weights modules apply soft enforcement, and are used to control the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. The Red Hat Virtualization Manager provides five default scheduling policies: Evenly_Distributed , Cluster_Maintenance , None , Power_Saving , and VM_Evenly_Distributed . You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. 
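If you want to confirm which scheduling policies are available without opening the Administration Portal, the Manager also exposes them through its REST API. The following query is only an illustrative sketch: it assumes a Manager reachable at manager.example.com and an admin@internal account, and the exact output format can vary between Red Hat Virtualization versions.
# List the scheduling policies known to the Manager, including the default policies named above.
curl -s -k \
  -u admin@internal:<password> \
  -H "Accept: application/xml" \
  https://manager.example.com/ovirt-engine/api/schedulingpolicies
The -k option skips certificate verification and is used here only to keep the sketch short; in practice, pass the Manager CA certificate with --cacert instead.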
See Scheduling Policies in the Administration Guide for more information about the properties of each scheduling policy. For detailed information about how scheduling policies work, see How does cluster scheduling policy work? . Figure 1.4. Evenly Distributed Scheduling Policy The Evenly_Distributed scheduling policy distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , VCpuToPhysicalCpuRatio , or MaxFreeMemoryForOverUtilized . The VM_Evenly_Distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold . Figure 1.5. Power Saving Scheduling Policy The Power_Saving scheduling policy distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that it can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value. Set the None policy to have no load or power sharing between hosts for running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . The Cluster_Maintenance scheduling policy limits activity in a cluster during maintenance tasks. When the Cluster_Maintenance policy is set, no new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate. 1.1.3.1. Creating a Scheduling Policy You can create new scheduling policies to control the logic by which virtual machines are distributed amongst a given cluster in your Red Hat Virtualization environment. Procedure Click Administration Configure . Click the Scheduling Policies tab. Click New . Enter a Name and Description for the scheduling policy. Configure filter modules: In the Filter Modules section, drag and drop the preferred filter modules to apply to the scheduling policy from the Disabled Filters section into the Enabled Filters section. Specific filter modules can also be set as the First , to be given highest priority, or Last , to be given lowest priority, for basic optimization. To set the priority, right-click any filter module, hover the cursor over Position and select First or Last . Configure weight modules: In the Weights Modules section, drag and drop the preferred weights modules to apply to the scheduling policy from the Disabled Weights section into the Enabled Weights & Factors section. Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules. Specify a load balancing policy: From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the scheduling policy. 
From the drop-down menu in the Properties section, select a load balancing property to apply to the scheduling policy and use the text field to the right of that property to specify a value. Use the + and - buttons to add or remove additional properties. Click OK . 1.1.3.2. Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window The following table details the options available in the New Scheduling Policy and Edit Scheduling Policy windows. Table 1.12. New Scheduling Policy and Edit Scheduling Policy Settings Field Name Description Name The name of the scheduling policy. This is the name used to refer to the scheduling policy in the Red Hat Virtualization Manager. Description A description of the scheduling policy. This field is recommended but not mandatory. Filter Modules A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below: ClusterInMaintenance : Virtual machines being started on the host that are not configured for high availability filter out the host. CpuPinning : Hosts which do not satisfy the CPU pinning definition. Migration : Prevents migration to the same host. CPUOverloaded : Hosts with CPU usage that is above the defined HighUtilization threshold for the interval defined by the CpuOverCommitDurationMinutes . PinToHost : Hosts other than the host to which the virtual machine is pinned. CPU-Level : Hosts that do not meet the CPU topology of the virtual machine. VmAffinityGroups : Hosts that do not meet the affinity rules defined for the virtual machine. NUMA : Hosts that do not have NUMA nodes that can accommodate the virtual machine vNUMA nodes in terms of resources. InClusterUpgrade : Hosts that are running an earlier version of the operating system than the host that the virtual machine currently runs on. MDevice : Hosts that do not provide the required mediated device (mDev). Memory : Hosts that do not have sufficient memory to run the virtual machine. CPU : Hosts with fewer CPUs than the number assigned to the virtual machine. HostedEnginesSpares : Reserves space for the Manager virtual machine on a specified number of self-hosted engine nodes. Swap : Hosts that are not swapping within the threshold. VM leases ready : Hosts that do not support virtual machines configured with storage leases. VmToHostsAffinityGroups : Group of hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, that virtual machines in an affinity group must run on one of the hosts in a group or on a separate host that is excluded from the group. HostDevice : Hosts that do not support host devices required by the virtual machine. HA : Forces the Manager virtual machine in a self-hosted engine environment to run only on hosts with a positive high availability score. Emulated-Machine : Hosts which do not have proper emulated machine support. HugePages : Hosts that do not meet the required number of Huge Pages needed for the virtual machine's memory. Migration-Tsc-Frequency : Hosts that do not have virtual machines with the same TSC frequency as the host currently running the virtual machine. Network : Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed. Label : Hosts that do not have the required affinity labels. 
Compatibility-Version : Hosts that do not have the correct cluster compatibility version support. Weights Modules A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. VmAffinityGroups : Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group. InClusterUpgrade : Weight hosts in accordance with their operating system version. The weight penalizes hosts with earlier operating systems more than hosts with the same operating system as the host that the virtual machine is currently running on. This ensures that priority is always given to hosts with later operating systems. OptimalForCpuEvenDistribution : Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage. CPU for high performance VMs : Prefers hosts that have more or an equal number of sockets, cores and threads than the VM. HA : Weights hosts in accordance with their high availability score. OptimalForCpuPowerSaving : Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage. OptimalForMemoryPowerSaving : Weights hosts in accordance with their memory usage, giving priority to hosts with lower available memory. CPU and NUMA pinning compatibility : Weights hosts in accordance to pinning compatibility. When a virtual machine has both vNUMA and pinning defined, this weight module gives preference to hosts whose CPU pinning does not clash with the vNUMA pinning. VmToHostsAffinityGroups : Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on one of the hosts in a group or on a separate host that is excluded from the group. OptimalForEvenGuestDistribution : Weights hosts in accordance with the number of virtual machines running on those hosts. OptimalForHaReservation : Weights hosts in accordance with their high availability score. OptimalForMemoryEvenDistribution : Weights hosts in accordance with their memory usage, giving priority to hosts with higher available memory. Fit VM to single host NUMA node : Weights hosts in accordance to whether a virtual machine fits into a single NUMA node. When a virtual machine does not have vNUMA defined, this weight module gives preference to hosts that can fit the virtual machine into a single physical NUMA. PreferredHosts : Preferred hosts have priority during virtual machine setup. Load Balancer This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage. Properties This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module. 1.1.4. Instance Types Instance types can be used to define the hardware configuration of a virtual machine. 
Selecting an instance type when creating or editing a virtual machine will automatically fill in the hardware configuration fields. This allows users to create multiple virtual machines with the same hardware configuration without having to manually fill in every field. Note Support for instance types is now deprecated, and will be removed in a future release. A set of predefined instance types is available by default, as outlined in the following table: Table 1.13. Predefined Instance Types Name Memory vCPUs Tiny 512 MB 1 Small 2 GB 1 Medium 4 GB 2 Large 8 GB 2 XLarge 16 GB 4 Administrators can also create, edit, and remove instance types from the Instance Types tab of the Configure window. Fields in the New Virtual Machine and Edit Virtual Machine windows that are bound to an instance type have a chain link icon next to them. If the value of one of these fields is changed, the virtual machine will be detached from the instance type, changing to Custom , and the chain icon will appear broken. However, if the value is changed back, the chain will relink and the instance type will move back to the selected one. 1.1.4.1. Creating Instance Types Administrators can create new instance types, which can then be selected by users when creating or editing virtual machines. Procedure Click Administration Configure . Click the Instance Types tab. Click New . Enter a Name and Description for the instance type. Click Show Advanced Options and configure the instance type's settings as required. The settings that appear in the New Instance Type window are identical to those in the New Virtual Machine window, but with the relevant fields only. See Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows in the Virtual Machine Management Guide . Click OK . The new instance type will appear in the Instance Types tab in the Configure window, and can be selected from the Instance Type drop-down list when creating or editing a virtual machine. 1.1.4.2. Editing Instance Types Administrators can edit existing instance types from the Configure window. Procedure Click Administration Configure . Click the Instance Types tab. Select the instance type to be edited. Click Edit . Change the settings as required. Click OK . The configuration of the instance type is updated. When a new virtual machine based on this instance type is created, or when an existing virtual machine based on this instance type is updated, the new configuration is applied. Existing virtual machines based on this instance type will display fields, marked with a chain icon, that will be updated. If the existing virtual machines were running when the instance type was changed, the orange Pending Changes icon will appear beside them and the fields with the chain icon will be updated at the next restart. 1.1.4.3. Removing Instance Types Procedure Click Administration Configure . Click the Instance Types tab. Select the instance type to be removed. Click Remove . If any virtual machines are based on the instance type to be removed, a warning window listing the attached virtual machines will appear. To continue removing the instance type, select the Approve Operation check box. Otherwise click Cancel . Click OK . The instance type is removed from the Instance Types list and can no longer be used when creating a new virtual machine. Any virtual machines that were attached to the removed instance type will now be attached to Custom (no instance type). 1.1.5. 
MAC Address Pools MAC address pools define the range(s) of MAC addresses allocated for each cluster. A MAC address pool is specified for each cluster. By using MAC address pools, Red Hat Virtualization can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a cluster are within the range for the assigned MAC address pool. The same MAC address pool can be shared by multiple clusters, but each cluster has a single MAC address pool assigned. A default MAC address pool is created by Red Hat Virtualization and is used if another MAC address pool is not assigned. For more information about assigning MAC address pools to clusters see Creating a New Cluster . Note If more than one Red Hat Virtualization cluster shares a network, do not rely solely on the default MAC address pool because the virtual machines of each cluster will try to use the same range of MAC addresses, leading to conflicts. To avoid MAC address conflicts, check the MAC address pool ranges to ensure that each cluster is assigned a unique MAC address range. The MAC address pool assigns the available MAC address following the last address that was returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If there are multiple MAC address ranges with available MAC addresses defined in a single MAC address pool, the ranges take turns in serving incoming requests in the same way available MAC addresses are selected. 1.1.5.1. Creating MAC Address Pools You can create new MAC address pools. Procedure Click Administration Configure . Click the MAC Address Pools tab. Click Add . Enter the Name and Description of the new MAC address pool. Select the Allow Duplicates check box to allow a MAC address to be used multiple times in a pool. The MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address. Note If one MAC address pool has duplicates disabled, and another has duplicates enabled, each MAC address can be used once in the pool with duplicates disabled but can be used multiple times in the pool with duplicates enabled. Enter the required MAC Address Ranges . To enter multiple ranges click the plus button to the From and To fields. Click OK . 1.1.5.2. Editing MAC Address Pools You can edit MAC address pools to change the details, including the range of MAC addresses available in the pool and whether duplicates are allowed. Procedure Click Administration Configure . Click the MAC Address Pools tab. Select the MAC address pool to be edited. Click Edit . Change the Name , Description , Allow Duplicates , and MAC Address Ranges fields as required. Note When a MAC address range is updated, the MAC addresses of existing NICs are not reassigned. MAC addresses that were already assigned, but are outside of the new MAC address range, are added as user-specified MAC addresses and are still tracked by that MAC address pool. Click OK . 1.1.5.3. Editing MAC Address Pool Permissions After a MAC address pool has been created, you can edit its user permissions. The user permissions control which data centers can use the MAC address pool. See Roles for more information on adding new user permissions. Procedure Click Administration Configure . Click the MAC Address Pools tab. Select the required MAC address pool. 
Edit the user permissions for the MAC address pool: To add user permissions to a MAC address pool: Click Add in the user permissions pane at the bottom of the Configure window. Search for and select the required users. Select the required role from the Role to Assign drop-down list. Click OK to add the user permissions. To remove user permissions from a MAC address pool: Select the user permission to be removed in the user permissions pane at the bottom of the Configure window. Click Remove to remove the user permissions. 1.1.5.4. Removing MAC Address Pools You can remove a created MAC address pool if the pool is not associated with a cluster, but the default MAC address pool cannot be removed. Procedure Click Administration Configure . Click the MAC Address Pools tab. Select the MAC address pool to be removed. Click Remove . Click OK .
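To audit the ranges described above without stepping through the Configure window, you can also read the MAC address pools from the Manager's REST API. This is only a sketch: it assumes a Manager at manager.example.com and an admin@internal account, and the output format depends on your Red Hat Virtualization version.
# List every MAC address pool with its name and ranges, which is a quick way
# to check that clusters sharing a network are not using overlapping ranges.
curl -s -k \
  -u admin@internal:<password> \
  -H "Accept: application/xml" \
  https://manager.example.com/ovirt-engine/api/macpools
This is a read-only query and does not change the pools; assignments are still edited through the Configure window as described above.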
[ "vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml", "name: oVirt - setup weaker SPICE encryption for old clients hosts: hostname vars: host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES' roles: - ovirt-host-deploy-spice-encryption", "ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml", "ansible-playbook -l hostname --extra-vars host_deploy_spice_cipher_string=\"DEFAULT:-RC4:-3DES:-DES\" /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/part-Administering_and_Maintaining_the_Red_Hat_Virtualization_Environment
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC In OpenShift Container Platform version 4.14, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 5.2. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster using an existing IBM(R) Virtual Private Cloud (VPC). Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 5.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 5.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. Note Subnet IDs are not supported. 5.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. 
You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 5.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 9 vpcSubnets: 10 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceID: "powervs-region-service-instance-id" credentialsMode: Manual publish: External 11 pullSecret: '{"auths": ...}' 12 fips: false sshKey: ssh-ed25519 AAAA... 13 1 4 If you do not provide these parameters and values, the installation program provides the default value. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 7 The machine CIDR must contain the subnets for the compute machines and control plane machines. 8 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 9 Specify the name of an existing VPC. 10 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 11 How to publish the user-facing endpoints of your cluster. 12 Required. The installation program prompts you for this value. 13 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 
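As a worked example of the vCPU formula from the minimum resource requirements table, and of why disabling simultaneous multithreading affects capacity planning, the short shell sketch below computes the vCPU count for a hypothetical IBM Power machine. The SMT-8 value is an assumption used only for illustration; use the thread count that your hardware actually reports.
# (threads per core x cores) x sockets = vCPUs
threads_per_core=8   # assumed SMT-8 with simultaneous multithreading enabled
cores=2
sockets=1
echo "SMT enabled:  $(( threads_per_core * cores * sockets )) vCPUs"   # 16
echo "SMT disabled: $(( 1 * cores * sockets )) vCPUs"                  # 2
The drop from 16 to 2 vCPUs on the same hardware is why the hyperthreading setting in the sample install-config.yaml is paired with the capacity planning warning above.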
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.13. steps Customize your cluster Optional: Opt out of remote health reporting
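Before working through the next steps listed above, it can be useful to confirm that the newly installed cluster is healthy from the same terminal where you exported the kubeconfig file. The commands below are a minimal sketch using standard oc queries; the exact output depends on your cluster.
# Confirm the cluster version, node readiness, and cluster Operator status.
oc get clusterversion
oc get nodes
oc get clusteroperators
All nodes should report Ready, and the cluster Operators should report Available without Degraded conditions before you continue with further customization.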
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 9 vpcSubnets: 10 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" credentialsMode: Manual publish: External 11 pullSecret: '{\"auths\": ...}' 12 fips: false sshKey: ssh-ed25519 AAAA... 13", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-powervs-vpc
function::sprint_stack
function::sprint_stack Name function::sprint_stack - Return stack for kernel addresses from string Synopsis Arguments stk String with list of hexadecimal (kernel) addresses Description Perform a symbolic lookup of the addresses in the given string, which is assumed to be the result of a prior call to backtrace . Returns a simple backtrace from the given hex string. One line per address. Includes the symbol name (or hex address if symbol couldn't be resolved) and module name (if found). Includes the offset from the start of the function if found, otherwise the offset will be added to the module (if found, between brackets). Returns the backtrace as string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN, to print fuller and richer stacks use print_stack. NOTE it is recommended to use sprint_syms instead of this function.
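As a brief illustration, and assuming the matching kernel debuginfo is installed so symbols can be resolved, a one-line script such as the following could print a symbolic backtrace the first time an example kernel function fires. The probe point vfs_read is only an arbitrary choice for this sketch; any kernel function probe would work.

# Hypothetical example: print one symbolic kernel backtrace, then exit
stap -e 'probe kernel.function("vfs_read") { println(sprint_stack(backtrace())); exit() }'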
[ "sprint_stack:string(stk:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sprint-stack
Deploying installer-provisioned clusters on bare metal
Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.16 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/deploying_installer-provisioned_clusters_on_bare_metal/index
Appendix B. Troubleshooting: Solutions to Specific Problems
Appendix B. Troubleshooting: Solutions to Specific Problems For troubleshooting advice for: Servers, see Section B.1, "Identity Management Servers" Replicas, see Section B.2, "Identity Management Replicas" Clients, see Section B.3, "Identity Management Clients" Authentication, see Section B.4, "Logging In and Authentication Problems" Vaults, see Section B.5, "Vaults" B.1. Identity Management Servers B.1.1. External CA Installation Fails The ipa-server-install --external-ca command fails with the following error: The env|grep proxy command displays variables such as the following: What this means: The *_proxy environmental variables are preventing the server from being installed. To fix the problem: Use the following shell script to unset the *_proxy environmental variables: Run the pkidestroy utility to remove the unsuccessful CA subsystem installation: Remove the failed IdM server installation: Retry running ipa-server-install --external-ca . B.1.2. named Daemon Fails to Start After installing an IdM server with integrated DNS, the named-pkcs11 fails to start. The /var/log/messages file includes an error message related to the named-pkcs11 service and the ldap.so library: What this means: The bind-chroot package is installed and is preventing the named-pkcs11 service from starting. To fix the problem: Uninstall the bind-chroot package. Restart the IdM server. B.1.3. Installing a Server Fails on a System with IPv6 Disabled When attempting to install an IdM server on a system with IPv6 disabled, the following error occurs during the installation process: What this means: Installing and running a server requires IPv6 to be enabled on the network. See Section 2.1.3, "System Requirements" . To fix the problem: Enable IPv6 on your system. For details, see How do I disable or enable the IPv6 protocol in Red Hat Enterprise Linux? in Red Hat Knowledgebase. Note that IPv6 is enabled by default on Red Hat Enterprise Linux 7 systems.
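Before retrying a failed installation, a quick pre-flight check along the following lines can confirm the two conditions discussed above (no proxy variables set, IPv6 enabled). This is only an illustration; the sysctl key shown assumes the default Red Hat Enterprise Linux naming.

# Hypothetical pre-flight check before rerunning ipa-server-install
env | grep -i proxy                      # should print nothing
unset http_proxy https_proxy ftp_proxy   # clear any proxy variables that are set
sysctl net.ipv6.conf.all.disable_ipv6    # a value of 0 means IPv6 is enabled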
[ "ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed", "env|grep proxy http_proxy=http://example.com:8080 ftp_proxy=http://example.com:8080 https_proxy=http://example.com:8080", "for i in ftp http https; do unset USD{i}_proxy; done", "pkidestroy -s CA -i pki-tomcat; rm -rf /var/log/pki/pki-tomcat /etc/sysconfig/pki-tomcat /etc/sysconfig/pki/tomcat/pki-tomcat /var/lib/pki/pki-tomcat /etc/pki/pki-tomcat /root/ipa.csr", "ipa-server-install --uninstall", "ipaserver named[6886]: failed to dynamically load driver 'ldap.so': libldap-2.4.so.2: cannot open shared object file: No such file or directory", "yum remove bind-chroot", "ipactl restart", "CRITICAL Failed to restart the directory server Command '/bin/systemctl restart [email protected]' returned non-zero exit status 1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-specific
Chapter 14. ImageTagMirrorSet [config.openshift.io/v1]
Chapter 14. ImageTagMirrorSet [config.openshift.io/v1] Description ImageTagMirrorSet holds cluster-wide information about how to handle registry mirror rules on using tag pull specification. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status contains the observed state of the resource. 14.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description imageTagMirrors array imageTagMirrors allows images referenced by image tags in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageTagMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using digest specification only, users should configure a list of mirrors using "ImageDigestMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagetagmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a deterministic order of mirrors, should configure them into one list of mirrors using the expected order. imageTagMirrors[] object ImageTagMirrors holds cluster-wide information about how to handle mirrors in the registries config. 14.1.2. 
.spec.imageTagMirrors Description imageTagMirrors allows images referenced by image tags in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in imageTagMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. To use mirrors to pull images using digest specification only, users should configure a list of mirrors using "ImageDigestMirrorSet" CRD. If the image pull specification matches the repository of "source" in multiple imagetagmirrorset objects, only the objects which define the most specific namespace match will be used. For example, if there are objects using quay.io/libpod and quay.io/libpod/busybox as the "source", only the objects using quay.io/libpod/busybox are going to apply for pull specification quay.io/libpod/busybox. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. If the "mirrors" is not specified, the image will continue to be pulled from the specified repository in the pull spec. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Users who want to use a deterministic order of mirrors, should configure them into one list of mirrors using the expected order. Type array 14.1.3. .spec.imageTagMirrors[] Description ImageTagMirrors holds cluster-wide information about how to handle mirrors in the registries config. Type object Required source Property Type Description mirrorSourcePolicy string mirrorSourcePolicy defines the fallback policy if fails to pull image from the mirrors. If unset, the image will continue to be pulled from the repository in the pull spec. sourcePolicy is valid configuration only when one or more mirrors are in the mirror list. mirrors array (string) mirrors is zero or more locations that may also contain the same images. No mirror will be configured if not specified. Images can be pulled from these mirrors only if they are referenced by their tags. The mirrored location is obtained by replacing the part of the input reference that matches source by the mirrors entry, e.g. for registry.redhat.io/product/repo reference, a (source, mirror) pair *.redhat.io, mirror.local/redhat causes a mirror.local/redhat/product/repo repository to be used. Pulling images by tag can potentially yield different images, depending on which endpoint we pull from. Configuring a list of mirrors using "ImageDigestMirrorSet" CRD and forcing digest-pulls for mirrors avoids that issue. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. If no mirror is specified or all image pulls from the mirror list fail, the image will continue to be pulled from the repository in the pull spec unless explicitly prohibited by "mirrorSourcePolicy". 
Other cluster configuration, including (but not limited to) other imageTagMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. "mirrors" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table source string source matches the repository that users refer to, e.g. in image pull specifications. Setting source to a registry hostname e.g. docker.io. quay.io, or registry.redhat.io, will match the image pull specification of corressponding registry. "source" uses one of the following formats: host[:port] host[:port]/namespace[/namespace...] host[:port]/namespace[/namespace...]/repo [*.]host for more information about the format, see the document about the location field: https://github.com/containers/image/blob/main/docs/containers-registries.conf.5.md#choosing-a-registry-toml-table 14.1.4. .status Description status contains the observed state of the resource. Type object 14.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/imagetagmirrorsets DELETE : delete collection of ImageTagMirrorSet GET : list objects of kind ImageTagMirrorSet POST : create an ImageTagMirrorSet /apis/config.openshift.io/v1/imagetagmirrorsets/{name} DELETE : delete an ImageTagMirrorSet GET : read the specified ImageTagMirrorSet PATCH : partially update the specified ImageTagMirrorSet PUT : replace the specified ImageTagMirrorSet /apis/config.openshift.io/v1/imagetagmirrorsets/{name}/status GET : read status of the specified ImageTagMirrorSet PATCH : partially update status of the specified ImageTagMirrorSet PUT : replace status of the specified ImageTagMirrorSet 14.2.1. /apis/config.openshift.io/v1/imagetagmirrorsets HTTP method DELETE Description delete collection of ImageTagMirrorSet Table 14.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageTagMirrorSet Table 14.2. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSetList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageTagMirrorSet Table 14.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.4. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.5. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 202 - Accepted ImageTagMirrorSet schema 401 - Unauthorized Empty 14.2.2. /apis/config.openshift.io/v1/imagetagmirrorsets/{name} Table 14.6. Global path parameters Parameter Type Description name string name of the ImageTagMirrorSet HTTP method DELETE Description delete an ImageTagMirrorSet Table 14.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageTagMirrorSet Table 14.9. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageTagMirrorSet Table 14.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.11. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageTagMirrorSet Table 14.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.13. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.14. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 401 - Unauthorized Empty 14.2.3. /apis/config.openshift.io/v1/imagetagmirrorsets/{name}/status Table 14.15. Global path parameters Parameter Type Description name string name of the ImageTagMirrorSet HTTP method GET Description read status of the specified ImageTagMirrorSet Table 14.16. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageTagMirrorSet Table 14.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.18. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageTagMirrorSet Table 14.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.20. Body parameters Parameter Type Description body ImageTagMirrorSet schema Table 14.21. HTTP responses HTTP code Reponse body 200 - OK ImageTagMirrorSet schema 201 - Created ImageTagMirrorSet schema 401 - Unauthorized Empty
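As an illustration of the fields described above, a minimal ImageTagMirrorSet object could be created as follows. The object name and both registry hostnames are placeholders, not values taken from this reference.

# Hypothetical minimal manifest applied with the oc CLI
cat <<'EOF' | oc apply -f -
apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: example-itms
spec:
  imageTagMirrors:
  - source: registry.example.com/library
    mirrors:
    - mirror.internal.example.com/library
EOF

With such an object in place, tag-based pulls that reference registry.example.com/library may be served from the listed mirror first, falling back to the source repository unless mirrorSourcePolicy prohibits it.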
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/imagetagmirrorset-config-openshift-io-v1
Chapter 4. Configuring Messaging Destinations
Chapter 4. Configuring Messaging Destinations Note Remember, configuring messaging destinations requires JBoss EAP to have messaging enabled. This functionality is enabled by default when running with the standalone-full.xml or standalone-full-ha.xml configuration files. The domain.xml configuration file also has messaging enabled. 4.1. Adding a Queue To add a Jakarta Messaging queue, use the jms-queue command from the management CLI: Note how the entries attribute is a list containing multiple JNDI names separated by a single space. Also note the use of square brackets, [] , to enclose the list of JNDI names. The queue-address provides routing configuration, and entries provides a list of JNDI names that clients can use to look up the queue. Reading a Queue's Attributes You can read a queue's configuration using the jms-queue command in the management CLI. Alternatively, you can read a queue's configuration by accessing the messaging-activemq subsystem using the management CLI: Attributes of a jms-queue The management CLI displays all the attributes of the jms-queue configuration element when given the following command: The table below provides all the attributes of a jms-queue : Attribute Description consumer-count The number of consumers consuming messages from this queue. Available at runtime. dead-letter-address The address to send dead messages to. See Configuring Dead Letter Addresses for more information. delivering-count The number of messages that this queue is currently delivering to its consumers. Available at runtime. durable Whether the queue is durable or not. See Messaging Styles for more information on durable subscriptions. entries The list of JNDI names the queue will be bound to. Required. expiry-address The address that will receive expired messages. See Configuring Message Expiry for details. legacy-entries The JNDI names the queue will be bound to. message-count The number of messages currently in this queue. Available at runtime. messages-added The number of messages added to this queue since it was created. Available at runtime. paused Whether the queue is paused. Available at runtime. queue-address The queue address defines what address is used for routing messages. See Configuring Address Settings for details on address settings. Required. scheduled-count The number of scheduled messages in this queue. Available at runtime. selector The queue selector. For more information on selectors see Filter Expressions and Message Selectors . temporary Whether the queue is temporary. See Temporary Queues and Runtime Queues for more information. 4.2. Adding a Topic Adding or reading a topic is much like adding a queue: Reading a Topic's Attributes Reading topic attributes also has syntax similar to that used for a queue: Attributes of a jms-topic The management CLI displays all the attributes of the jms-topic configuration element when given the following command: The table below lists the attributes of a jms-topic : Attribute Description delivering-count The number of messages that this queue is currently delivering to its consumers. Available at runtime. durable-message-count The number of messages for all durable subscribers for this topic. Available at runtime. durable-subscription-count The number of durable subscribers for this topic. Available at runtime. entries The JNDI names the topic will be bound to. Required. legacy-entries The legacy JNDI names the topic will be bound to. message-count The number of messages currently in this queue. Available at runtime. 
messages-added The number of messages added to this queue since it was created. Available at runtime. non-durable-message-count The number of messages for all non-durable subscribers for this topic. Available at runtime. non-durable-subscription-count The number of non-durable subscribers for this topic. Available at runtime. subscription-count The number of (durable and non-durable) subscribers for this topic. Available at runtime. temporary Whether the topic is temporary. topic-address The address the topic points to. Required. 4.3. JNDI Entries and Clients A queue or topic must be bound to the java:jboss/exported namespace for a remote client to be able to look it up. The client must use the text after java:jboss/exported/ when doing the lookup. For example, a queue named testQueue has for its entries the list jms/queue/test java:jboss/exported/jms/queue/test . A remote client wanting to send messages to testQueue would look up the queue using the string jms/queue/test . A local client, on the other hand, could look it up using java:jboss/exported/jms/queue/test , java:jms/queue/test , or more simply jms/queue/test . Management CLI Help You can find more information about the jms-queue and jms-topic commands by using the --help --commands flags: 4.4. Pause method for Jakarta Messaging topics using the management API You can pause a topic by pausing all its consumers. Any new subscriptions that are registered while the topic is paused are also paused. The subscribers of the topic do not receive new messages from the paused topic. However, the paused topic can still receive messages sent to it. When you resume the topic, the queued messages are delivered to the subscribers. You can use the persist parameter to store the state of the topic so that the topic stays paused even if you restart the broker. Additional resources For information about pausing a topic, see Pausing a topic . For information about resuming a topic, see Resuming a topic . 4.5. Pausing a topic You can pause a topic so that all the subscribers of the topic stop receiving new messages from it. Procedure Pause the topic as shown in the following example: The paused topic is shown in the following example: Additional resources For information about the pause method for topics, see Pause method for Jakarta Messaging topics using the management API . For information about resuming a topic, see Resuming a topic . 4.6. Resuming a topic You can resume a paused topic. When you resume the topic, the messages the topic received while it was paused are delivered to the subscribers. Procedure Resume the topic as shown in the following example: The resumed topic is shown in the following example: Additional resources For information about the pause method for topics, see Pause method for Jakarta Messaging topics using the management API . For information about pausing a topic, see Pausing a topic .
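The individual commands above can be combined into a single management CLI session. The following sketch uses hypothetical destination names ( ordersQueue , ordersTopic ) and assumes a running server reachable with the default CLI connection settings; it is an illustration rather than a prescribed procedure.

# Hypothetical management CLI session (EAP_HOME is a placeholder for the installation directory)
EAP_HOME/bin/jboss-cli.sh --connect
jms-queue add --queue-address=ordersQueue --entries=[jms/queue/orders java:jboss/exported/jms/queue/orders]
jms-queue read-resource --queue-address=ordersQueue
jms-topic add --topic-address=ordersTopic --entries=[jms/topic/orders java:jboss/exported/jms/topic/orders]
/subsystem=messaging-activemq/server=default/jms-topic=ordersTopic:read-attribute(name=paused)

A remote client would then look up the queue as jms/queue/orders , while a local client could also use the full java:jboss/exported/jms/queue/orders name.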
[ "jms-queue add --queue-address=myQueue --entries=[queue/myQueue jms/queue/myQueue java:jboss/exported/jms/queue/myQueue]", "jms-queue read-resource --queue-address=myQueue", "/subsystem=messaging-activemq/server=default/jms-queue=myQueue:read-resource() { \"outcome\" => \"success\", \"result\" => { \"durable\" => true, \"entries\" => [\"queue/myQueue jms/queue/myQueue java:jboss/exported/jms/queue/myQueue\"], \"legacy-entries\" => undefined, \"selector\" => undefined } }", "/subsystem=messaging-activemq/server=default/jms-queue=*:read-resource-description()", "jms-topic add --topic-address=myTopic --entries=[topic/myTopic jms/topic/myTopic java:jboss/exported/jms/topic/myTopic]", "jms-topic read-resource --topic-address=myTopic entries topic/myTopic jms/topic/myTopic java:jboss/exported/jms/topic/myTopic legacy-entries=n/a", "/subsystem=messaging-activemq/server=default/jms-topic=myTopic:read-resource { \"outcome\" => \"success\", \"result\" => { \"entries\" => [\"topic/myTopic jms/topic/myTopic java:jboss/exported/jms/topic/myTopic\"], \"legacy-entries\" => undefined } }", "/subsystem=messaging-activemq/server=default/jms-topic=*:read-resource-description()", "jms-queue --help --commands", "jms-topic --help --commands", "/subsystem=messaging-activemq/server=default/jms-topic=topic:pause() { \"outcome\" => \"success\", \"result\" => undefined }", "/subsystem=messaging-activemq/server=default/jms-topic=topic:read-attribute(name=paused) { \"outcome\" => \"success\", \"result\" => true }", "/subsystem=messaging-activemq/server=default/jms-topic=topic:resume() { \"outcome\" => \"success\", \"result\" => undefined }", "/subsystem=messaging-activemq/server=default/jms-topic=topic:read-attribute(name=paused) { \"outcome\" => \"success\", \"result\" => false }" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/configure_destinations_artemis
33.11. Post-Installation Script
33.11. Post-Installation Script Figure 33.14. Post-Installation Script You can also add commands to execute on the system after the installation is completed. If the network is properly configured in the kickstart file, the network is enabled, and the script can include commands to access resources on the network. To include a post-installation script, type it in the text area. Important The version of anaconda in previous releases of Red Hat Enterprise Linux included a version of busybox that provided shell commands in the pre-installation and post-installation environments. The version of anaconda in Red Hat Enterprise Linux 6 no longer includes busybox , and uses GNU bash commands instead. Refer to Appendix G, Alternatives to busybox commands for more information. Important Do not include the %post command. It is added for you. For example, to change the message of the day for the newly installed system, add the following command to the %post section: Note More examples can be found in Section 32.8, "Kickstart Examples" . 33.11.1. Chroot Environment To run the post-installation script outside of the chroot environment, click the checkbox next to this option at the top of the Post-Installation window. This is equivalent to using the --nochroot option in the %post section. To make changes to the newly installed file system within the post-installation section, but outside of the chroot environment, you must prepend the directory name with /mnt/sysimage/ . For example, if you select Run outside of the chroot environment , the example must be changed to the following:
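For illustration only, a kickstart file generated with Run outside of the chroot environment selected might contain a %post section similar to the following. Remember that the Kickstart Configurator adds the %post line itself, so you only type the commands; this fragment is a hypothetical sketch of the resulting file, not output copied from the tool.

# Hypothetical generated kickstart fragment
%post --nochroot
# Runs outside the chroot, so new-system paths need the /mnt/sysimage/ prefix
echo "Welcome!" > /mnt/sysimage/etc/motd
%end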
[ "echo \"Welcome!\" > /etc/motd", "echo \"Welcome!\" > /mnt/sysimage/etc/motd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-redhat-config-kickstart-postinstall
Appendix C. Revision History
Appendix C. Revision History Revision History Revision 0.5-0 Wed Feb 12 2020 Jaroslav Klech Provided a complete kernel version to Architectures and New Features chapters. Revision 0.4-9 Mon Oct 07 2019 Jiri Herrmann Clarified a Technology Preview note related to OVMF. Revision 0.4-8 Mon May 13 2019 Lenka Spackova Added a known issue related to freeradius upgrade (Networking). Revision 0.4-7 Sun Apr 28 2019 Lenka Spackova Improved wording of a Technology Preview feature description (File Systems). Revision 0.4-6 Mon Feb 04 2019 Lenka Spackova Improved structure of the book. Revision 0.4-5 Thu Sep 13 2018 Lenka Spackova Moved CephFS from Technology Previews to fully supported features (File Systems). Revision 0.4-4 Tue Apr 17 2018 Lenka Spackova Updated a recommendation related to the sslwrap() deprecation. Revision 0.4-3 Tue Apr 10 2018 Lenka Spackova Added a deprecation note related to the inputname option of the rsyslog imudp module. Revision 0.4-2 Thu Apr 05 2018 Lenka Spackova Moved CAT to Technology Previews (Kernel). Revision 0.4-1 Thu Mar 22 2018 Lenka Spackova Fixed the openldap-servers package name (Deprecated Functionality). Revision 0.4-0 Fri Mar 16 2018 Lenka Spackova Added a pcs-related bug fix (Clustering). Revision 0.3-9 Mon Feb 19 2018 Mirek Jahoda The incorrectly placed TPM-related features moved to the Technnology Preview section. Revision 0.3-8 Tue Feb 06 2018 Lenka Spackova Added a missing Technology Preview - OVMF (Virtualization). Added information regarding deprecation of containers using the libvirt-lxc tooling. Revision 0.3-7 Wed Jan 17 2018 Lenka Spackova Updated the FCoE deprecation notice. Revision 0.3-6 Wed Jan 10 2018 Lenka Spackova Changed the status of Device DAX for NVDIMM devices from Technology Preview to fully supported (Storage). Revision 0.3-5 Thu Dec 14 2017 Lenka Spackova Unified the structure of deprecated drivers. Revision 0.3-4 Tue Dec 12 2017 Lenka Spackova Updated deprecated adapters from the qla2xxx driver. Revision 0.3-3 Wed Nov 22 2017 Lenka Spackova Added information regarding pam_krb5 to sssd migration (Deprecated Functionality). Revision 0.3-2 Wed Nov 15 2017 Lenka Spackova Fixed a typo. Revision 0.3-1 Tue Oct 31 2017 Lenka Spackova Added an LVM-related bug fix description (Storage). Revision 0.3-3 Mon Oct 30 2017 Lenka Spackova Added an autofs bug fix description (File Systems). Added information on changes in the ld linker behavior to Deprecated Functionality. Revision 0.3-2 Wed Sep 13 2017 Lenka Spackova Added information regarding limited support for visuals in the Xorg server. Revision 0.3-1 Mon Sep 11 2017 Lenka Spackova Added CUIR enhanced scope detection to Technology Previews (Kernel). Updated openssh rebase description in New Features (Security). Revision 0.3-0 Mon Sep 04 2017 Lenka Spackova Added two known issues (Security, Desktop). Revision 0.2-9 Mon Aug 21 2017 Lenka Spackova Added tcp_wrappers to Deprecated Functionality. Revision 0.2-8 Tue Aug 15 2017 Lenka Spackova Added several new features and a known issue. Revision 0.2-7 Mon Aug 14 2017 Lenka Spackova Removed a duplicate note. Revision 0.2-6 Thu Aug 10 2017 Lenka Spackova Updated several known issues. Revision 0.2-5 Tue Aug 08 2017 Lenka Spackova Added two known issues. Revision 0.2-4 Mon Aug 07 2017 Lenka Spackova Updated FCoE deprecation notice. Minor updates and additions. Revision 0.2-3 Fri Aug 04 2017 Lenka Spackova Moved several new features from Virtualization to System and Subscription Management. 
Revision 0.2-2 Thu Aug 03 2017 Lenka Spackova Updated information on Btrfs ; it is now both in the Technology Previews and Deprecated Functionality parts. Minor updates and additions. Revision 0.2-1 Tue Aug 01 2017 Lenka Spackova Release of the Red Hat Enterprise Linux 7.4 Release Notes. Revision 0.0-4 Tue May 23 2017 Lenka Spackova Release of the Red Hat Enterprise Linux 7.4 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/appe-7.4_release_notes-revision_history
Chapter 17. Backing up and restoring Red Hat Quay on a standalone deployment
Chapter 17. Backing up and restoring Red Hat Quay on a standalone deployment Use the content within this section to back up and restore Red Hat Quay in standalone deployments. 17.1. Optional: Enabling read-only mode for Red Hat Quay Enabling read-only mode for your Red Hat Quay deployment allows you to manage the registry's operations. Red Hat Quay administrators can enable read-only mode to restrict write access to the registry, which helps ensure data integrity, mitigate risks during maintenance windows, and provide a safeguard against unintended modifications to registry data. It also helps to ensure that your Red Hat Quay registry remains online and available to serve images to users. Prerequisites If you are using Red Hat Enterprise Linux (RHEL) 7.x: You have enabled the Red Hat Software Collections List (RHSCL). You have installed Python 3.6. You have downloaded the virtualenv package. You have installed the git CLI. If you are using Red Hat Enterprise Linux (RHEL) 8: You have installed Python 3 on your machine. You have downloaded the python3-virtualenv package. You have installed the git CLI. You have cloned the https://github.com/quay/quay.git repository. 17.1.1. Creating service keys for standalone Red Hat Quay Red Hat Quay uses service keys to communicate with various components. These keys are used to sign completed requests, such as requesting to scan images, login, storage access, and so on. Procedure If your Red Hat Quay registry is readily available, you can generate service keys inside of the Quay registry container. Enter the following command to generate a key pair inside of the Quay container: USD podman exec quay python3 tools/generatekeypair.py quay-readonly If your Red Hat Quay is not readily available, you must generate your service keys inside of a virtual environment. Change into the directory of your Red Hat Quay deployment and create a virtual environment inside of that directory: USD cd <USDQUAY>/quay && virtualenv -v venv Activate the virtual environment by entering the following command: USD source venv/bin/activate Optional. Install the pip CLI tool if you do not have it installed: USD venv/bin/pip install --upgrade pip In your Red Hat Quay directory, create a requirements-generatekeys.txt file with the following content: USD cat << EOF > requirements-generatekeys.txt cryptography==3.4.7 pycparser==2.19 pycryptodome==3.9.4 pycryptodomex==3.9.4 pyjwkest==1.4.2 PyJWT==1.7.1 Authlib==1.0.0a2 EOF Enter the following command to install the Python dependencies defined in the requirements-generatekeys.txt file: USD venv/bin/pip install -r requirements-generatekeys.txt Enter the following command to create the necessary service keys: USD PYTHONPATH=. venv/bin/python /<path_to_cloned_repo>/tools/generatekeypair.py quay-readonly Example output Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem Enter the following command to deactivate the virtual environment: USD deactivate 17.1.2. Adding keys to the PostgreSQL database Use the following procedure to add your service keys to the PostgreSQL database. Prerequistes You have created the service keys. 
Procedure Enter the following command to enter your Red Hat Quay database environment: USD podman exec -it postgresql-quay psql -U postgres -d quay Display the approval types and associated notes of the servicekeyapproval by entering the following command: quay=# select * from servicekeyapproval; Example output id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 | ... Add the service key to your Red Hat Quay database by entering the following query: quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}'); Example output INSERT 0 1 Next, add the key approval with the following query: quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES ("ServiceKeyApprovalType.SUPERUSER", "CURRENT_DATE", {include_notes_here_on_why_this_is_being_added}); Example output INSERT 0 1 Set the approval_id field on the created service key row to the id field from the created service key approval. You can use the following SELECT statements to get the necessary IDs: UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly'; UPDATE 1 17.1.3. Configuring read-only mode for standalone Red Hat Quay After the service keys have been created and added to your PostgreSQL database, you must restart the Quay container on your standalone deployment. Prerequisites You have created the service keys and added them to your PostgreSQL database. Procedure Shut down all Red Hat Quay instances on all virtual machines. For example: USD podman stop <quay_container_name_on_virtual_machine_a> USD podman stop <quay_container_name_on_virtual_machine_b> Enter the following command to copy the contents of the quay-readonly.kid file and the quay-readonly.pem file to the directory that holds your Red Hat Quay configuration bundle: USD cp quay-readonly.kid quay-readonly.pem USDQuay/config Enter the following command to set file permissions on all files in your configuration bundle folder: USD setfacl -m user:1001:rw USDQuay/config/* Modify your Red Hat Quay config.yaml file and add the following information: # ... REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem' # ... Distribute the new configuration bundle to all Red Hat Quay instances. Start Red Hat Quay by entering the following command: USD podman run -d --rm -p 80:8080 -p 443:8443 \ --name=quay-main-app \ -v USDQUAY/config:/conf/stack:Z \ -v USDQUAY/storage:/datastorage:Z \ {productrepo}/{quayimage}:{productminv} After starting Red Hat Quay, a banner inside of your instance informs users that Red Hat Quay is running in read-only mode. Pushes should be rejected and a 405 error should be logged.
You can test this by running the following command: USD podman push <quay-server.example.com>/quayadmin/busybox:test Example output 613be09ab3c0: Preparing denied: System is currently read-only. Pulls will succeed but all write operations are currently suspended. With your Red Hat Quay deployment on read-only mode, you can safely manage your registry's operations and perform such actions as backup and restore. Optional. After you are finished with read-only mode, you can return to normal operations by removing the following information from your config.yaml file. Then, restart your Red Hat Quay deployment: # ... REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem' # ... USD podman restart <container_id> 17.1.4. Updating read-only expiration time The Red Hat Quay read-only key has an expiration date, and when that date passes the key is deactivated. Before the key expires, its expiration time can be updated in the database. To update the key, connect your Red Hat Quay production database using the methods described earlier and issue the following query: quay=# UPDATE servicekey SET expiration_date = 'new-date' WHERE id = servicekey_id; The list of service key IDs can be obtained by running the following query: SELECT id, name, expiration_date FROM servicekey; 17.2. Backing up Red Hat Quay on standalone deployments This procedure describes how to create a backup of Red Hat Quay on standalone deployments. Procedure Create a temporary backup directory, for example, quay-backup : USD mkdir /tmp/quay-backup The following example command denotes the local directory that the Red Hat Quay was started in, for example, /opt/quay-install : Change into the directory that bind-mounts to /conf/stack inside of the container, for example, /opt/quay-install , by running the following command: USD cd /opt/quay-install Compress the contents of your Red Hat Quay deployment into an archive in the quay-backup directory by entering the following command: USD tar cvf /tmp/quay-backup/quay-backup.tar.gz * Example output: config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key Back up the Quay container service by entering the following command: Redirect the contents of your conf/stack/config.yaml file to your temporary quay-config.yaml file by entering the following command: USD podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml Obtain the DB_URI located in your temporary quay-config.yaml by entering the following command: USD grep DB_URI /tmp/quay-backup/quay-config.yaml Example output: Extract the PostgreSQL contents to your temporary backup directory in a backup .sql file by entering the following command: USD pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql Print the contents of your DISTRIBUTED_STORAGE_CONFIG by entering the following command: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> s3_region: <region> Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 7: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 7: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Sync the quay bucket to the /tmp/quay-backup/blob-backup/ directory from the hostname of your DISTRIBUTED_STORAGE_CONFIG : 
USD aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2 Example output: It is recommended that you delete the quay-config.yaml file after syncing the quay bucket because it contains sensitive information. The quay-config.yaml file will not be lost because it is backed up in the quay-backup.tar.gz file. 17.3. Restoring Red Hat Quay on standalone deployments This procedure describes how to restore Red Hat Quay on standalone deployments. Prerequisites You have backed up your Red Hat Quay deployment. Procedure Create a new directory that will bind-mount to /conf/stack inside of the Red Hat Quay container: USD mkdir /opt/new-quay-install Copy the contents of your temporary backup directory created in Backing up Red Hat Quay on standalone deployments to the new-quay-install1 directory created in Step 1: USD cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/ Change into the new-quay-install directory by entering the following command: USD cd /opt/new-quay-install/ Extract the contents of your Red Hat Quay directory: USD tar xvf /tmp/quay-backup/quay-backup.tar.gz * Example output: Recall the DB_URI from your backed-up config.yaml file by entering the following command: USD grep DB_URI config.yaml Example output: postgresql://<username>:[email protected]/quay Run the following command to enter the PostgreSQL database server: USD sudo postgres Enter psql and create a new database in 172.24.10.50 to restore the quay databases, for example, example_restore_registry_quay_database , by entering the following command: USD psql "host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123" postgres=> CREATE DATABASE example_restore_registry_quay_database; Example output: Connect to the database by running the following command: postgres=# \c "example-restore-registry-quay-database"; Example output: You are now connected to database "example-restore-registry-quay-database" as user "postgres". Create a pg_trmg extension of your Quay database by running the following command: example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm; Example output: CREATE EXTENSION Exit the postgres CLI by entering the following command: \q Import the database backup to your new database by running the following command: USD psql "host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123" -W < /tmp/quay-backup/quay-backup.sql Example output: Update the value of DB_URI in your config.yaml from postgresql://<username>:[email protected]/quay to postgresql://<username>:[email protected]/example-restore-registry-quay-database before restarting the Red Hat Quay deployment. Note The DB_URI format is DB_URI postgresql://<login_user_name>:<login_user_password>@<postgresql_host>/<quay_database> . If you are moving from one PostgreSQL server to another PostgreSQL server, update the value of <login_user_name> , <login_user_password> and <postgresql_host> at the same time. In the /opt/new-quay-install directory, print the contents of your DISTRIBUTED_STORAGE_CONFIG bundle: USD cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10 Example output: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_region: <region> s3_secret_key: <s3_secret_key> host: <host_name> Note Your DISTRIBUTED_STORAGE_CONFIG in /opt/new-quay-install must be updated before restarting your Red Hat Quay deployment. 
Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 13: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 13: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Create a new s3 bucket by entering the following command: USD aws s3 mb s3://<new_bucket_name> --region us-east-2 Example output: USD make_bucket: quay Upload all blobs to the new s3 bucket by entering the following command: USD aws s3 sync --no-verify-ssl \ --endpoint-url <example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/ 1 The Red Hat Quay registry endpoint must be the same before backup and after restore. Example output: upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec ... Before restarting your Red Hat Quay deployment, update the storage settings in your config.yaml: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> s3_region: <region> host: <host_name>
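Taken together, the backup half of this procedure reduces to a short command sequence. The following recap is only a sketch of the steps already shown above; the database host, credentials, and bucket name are the same placeholders used throughout and must be replaced with your own values.

# Condensed backup sequence (placeholders as in the procedure above)
mkdir /tmp/quay-backup
cd /opt/quay-install && tar cvf /tmp/quay-backup/quay-backup.tar.gz *
podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml
pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2

Keep the resulting quay-backup.tar.gz , quay-backup.sql , and blob-backup/ contents together; they are the three inputs that the restore procedure expects.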
[ "podman exec quay python3 tools/generatekeypair.py quay-readonly", "cd <USDQUAY>/quay && virtualenv -v venv", "source venv/bin/activate", "venv/bin/pip install --upgrade pip", "cat << EOF > requirements-generatekeys.txt cryptography==3.4.7 pycparser==2.19 pycryptodome==3.9.4 pycryptodomex==3.9.4 pyjwkest==1.4.2 PyJWT==1.7.1 Authlib==1.0.0a2 EOF", "venv/bin/pip install -r requirements-generatekeys.txt", "PYTHONPATH=. venv/bin/python /<path_to_cloned_repo>/tools/generatekeypair.py quay-readonly", "Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem", "deactivate", "podman exec -it postgresql-quay psql -U postgres -d quay", "quay=# select * from servicekeyapproval;", "id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 |", "quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}');", "INSERT 0 1", "quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES (\"ServiceKeyApprovalType.SUPERUSER\", \"CURRENT_DATE\", {include_notes_here_on_why_this_is_being_added});", "INSERT 0 1", "UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly';", "UPDATE 1", "podman stop <quay_container_name_on_virtual_machine_a>", "podman stop <quay_container_name_on_virtual_machine_b>", "cp quay-readonly.kid quay-readonly.pem USDQuay/config", "setfacl -m user:1001:rw USDQuay/config/*", "REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'", "podman run -d --rm -p 80:8080 -p 443:8443 --name=quay-main-app -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "podman push <quay-server.example.com>/quayadmin/busybox:test", "613be09ab3c0: Preparing denied: System is currently read-only. 
Pulls will succeed but all write operations are currently suspended.", "REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'", "podman restart <container_id>", "quay=# UPDATE servicekey SET expiration_date = 'new-date' WHERE id = servicekey_id;", "SELECT id, name, expiration_date FROM servicekey;", "mkdir /tmp/quay-backup", "podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "cd /opt/quay-install", "tar cvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "podman inspect quay-app | jq -r '.[0].Config.CreateCommand | .[]' | paste -s -d ' ' - /usr/bin/podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml", "grep DB_URI /tmp/quay-backup/quay-config.yaml", "postgresql://<username>:[email protected]/quay", "pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> s3_region: <region>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2", "download: s3://<user_name>/registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a to registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a download: s3://<user_name>/registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d to registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d", "mkdir /opt/new-quay-install", "cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/", "cd /opt/new-quay-install/", "tar xvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "grep DB_URI config.yaml", "postgresql://<username>:[email protected]/quay", "sudo postgres", "psql \"host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123\" postgres=> CREATE DATABASE example_restore_registry_quay_database;", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm;", "CREATE EXTENSION", "\\q", "psql \"host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123\" -W < /tmp/quay-backup/quay-backup.sql", "SET SET SET SET SET", "cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_region: <region> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 mb s3://<new_bucket_name> --region us-east-2", "make_bucket: quay", "aws s3 sync --no-verify-ssl --endpoint-url 
<example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/", "upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> s3_region: <region> host: <host_name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/standalone-deployment-backup-restore
Chapter 1. Deployment overview
Chapter 1. Deployment overview Streams for Apache Kafka simplifies the process of running Apache Kafka in an OpenShift cluster. This guide provides instructions for deploying and managing Streams for Apache Kafka. Deployment options and steps are covered using the example installation files included with Streams for Apache Kafka. While the guide highlights important configuration considerations, it does not cover all available options. For a deeper understanding of the Kafka component configuration options, refer to the Streams for Apache Kafka Custom Resource API Reference . In addition to deployment instructions, the guide offers pre- and post-deployment guidance. It covers setting up and securing client access to your Kafka cluster. Furthermore, it explores additional deployment options such as metrics integration, distributed tracing, and cluster management tools like Cruise Control and the Streams for Apache Kafka Drain Cleaner. You'll also find recommendations on managing Streams for Apache Kafka and fine-tuning Kafka configuration for optimal performance. Upgrade instructions are provided for both Streams for Apache Kafka and Kafka, to help keep your deployment up to date. Streams for Apache Kafka is designed to be compatible with all types of OpenShift clusters, irrespective of their distribution. Whether your deployment involves public or private clouds, or if you are setting up a local development environment, the instructions in this guide are applicable in all cases. 1.1. Streams for Apache Kafka custom resources The deployment of Kafka components onto an OpenShift cluster using Streams for Apache Kafka is highly configurable through the use of custom resources. These resources are created as instances of APIs introduced by Custom Resource Definitions (CRDs), which extend OpenShift resources. CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with Streams for Apache Kafka for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the Streams for Apache Kafka distribution. CRDs also allow Streams for Apache Kafka resources to benefit from native OpenShift features like CLI accessibility and configuration validation. 1.1.1. Streams for Apache Kafka custom resource example CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage Streams for Apache Kafka-specific resources. After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. Depending on the cluster setup, installation typically requires cluster admin privileges. Note Access to manage custom resources is limited to Streams for Apache Kafka administrators. For more information, see Section 4.6, "Designating Streams for Apache Kafka administrators" . A CRD defines a new kind of resource, such as kind:Kafka , within an OpenShift cluster. The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster. Each Streams for Apache Kafka-specific custom resource conforms to the schema defined by the CRD for the resource's kind . The custom resources for Streams for Apache Kafka components have common configuration properties, which are defined under spec . 
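After the CRDs are installed, you can verify that they are present before creating any custom resources. This is a quick check rather than a required step; it assumes the CRDs were created from the example installation files, which label them with app: strimzi :
Listing installed CRDs
oc get crd -l app=strimzi
oc api-resources --api-group=kafka.strimzi.io
If the output is empty, the CRDs have not been installed yet and custom resources such as Kafka or KafkaTopic cannot be created.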
To understand the relationship between a CRD and a custom resource, let's look at a sample of the CRD for a Kafka topic. Kafka topic CRD apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # ... singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # ... subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 # ... 1 The metadata for the topic CRD, its name and a label to identify the CRD. 2 The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics . 3 The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic . 4 The information presented when using a get command on the custom resource. 5 The current status of the CRD as described in the schema reference for the resource. 6 openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica. Note You can identify the CRD YAML files supplied with the Streams for Apache Kafka installation files, because the file names contain an index number followed by 'Crd'. Here is a corresponding example of a KafkaTopic custom resource. Kafka topic custom resource apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: "2019-08-20T11:37:00.706Z" status: "True" type: Ready observedGeneration: 1 / ... 1 The kind and apiVersion identify the CRD of which the custom resource is an instance. 2 A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is same as the name of the Kafka resource) to which a topic or user belongs. 3 The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified. 4 Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime . Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API. After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in Streams for Apache Kafka. Additional resources Extend the Kubernetes API with CustomResourceDefinitions Example configuration files provided with Streams for Apache Kafka 1.1.2. Performing oc operations on custom resources You can use oc commands to retrieve information and perform other operations on Streams for Apache Kafka custom resources. Use oc commands, such as get , describe , edit , or delete , to perform operations on resource types. 
For example, oc get kafkatopics retrieves a list of all Kafka topics and oc get kafkas retrieves all deployed Kafka clusters. When referencing resource types, you can use both singular and plural names: oc get kafkas gets the same results as oc get kafka . You can also use the short name of the resource. Learning short names can save you time when managing Streams for Apache Kafka. The short name for Kafka is k , so you can also run oc get k to list all Kafka clusters. Listing Kafka clusters oc get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3 Table 1.1. Long and short names for each Streams for Apache Kafka resource Streams for Apache Kafka resource Long name Short name Kafka kafka k Kafka Node Pool kafkanodepool knp Kafka Topic kafkatopic kt Kafka User kafkauser ku Kafka Connect kafkaconnect kc Kafka Connector kafkaconnector kctr Kafka Mirror Maker kafkamirrormaker kmm Kafka Mirror Maker 2 kafkamirrormaker2 kmm2 Kafka Bridge kafkabridge kb Kafka Rebalance kafkarebalance kr 1.1.2.1. Resource categories Categories of custom resources can also be used in oc commands. All Streams for Apache Kafka custom resources belong to the category strimzi , so you can use strimzi to get all the Streams for Apache Kafka resources with one command. For example, running oc get strimzi lists all Streams for Apache Kafka custom resources in a given namespace. Listing all custom resources oc get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple The oc get strimzi -o name command returns all resource types and resource names. The -o name option fetches the output in the type/name format Listing all resource types and names oc get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user You can combine this strimzi command with other commands. For example, you can pass it into a oc delete command to delete all resources in a single command. Deleting all custom resources oc delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io "my-cluster" deleted kafkatopic.kafka.strimzi.io "kafka-apps" deleted kafkauser.kafka.strimzi.io "my-user" deleted Deleting all resources in a single operation might be useful, for example, when you are testing new Streams for Apache Kafka features. 1.1.2.2. Querying the status of sub-resources There are other values you can pass to the -o option. For example, by using -o yaml you get the output in YAML format. Using -o json will return it as JSON. You can see all the options in oc get --help . One of the most useful options is the JSONPath support , which allows you to pass JSONPath expressions to query the Kubernetes API. A JSONPath expression can extract or navigate specific parts of any resource. For example, you can use the JSONPath expression {.status.listeners[?(@.name=="tls")].bootstrapServers} to get the bootstrap address from the status of the Kafka custom resource and use it in your Kafka clients. Here, the command retrieves the bootstrapServers value of the listener named tls : Retrieving the bootstrap address oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="tls")].bootstrapServers}{"\n"}' my-cluster-kafka-bootstrap.myproject.svc:9093 By changing the name condition you can also get the address of the other Kafka listeners. 
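For example, to read the address of a different listener, change the name used in the filter. This assumes a listener named plain is configured, as in the status example shown later in this chapter:
Retrieving the bootstrap address of the plain listener
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="plain")].bootstrapServers}{"\n"}'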
You can use jsonpath to extract any other property or group of properties from any custom resource. 1.1.3. Streams for Apache Kafka custom resource status information Status properties provide status information for certain custom resources. The following table lists the custom resources that provide status information (when deployed) and the schemas that define the status properties. For more information on the schemas, see the Streams for Apache Kafka Custom Resource API Reference . Table 1.2. Custom resources that provide status information Streams for Apache Kafka resource Schema reference Publishes status information on... Kafka KafkaStatus schema reference The Kafka cluster, its listeners, and node pools KafkaNodePool KafkaNodePoolStatus schema reference The nodes in the node pool, their roles, and the associated Kafka cluster KafkaTopic KafkaTopicStatus schema reference Kafka topics in the Kafka cluster KafkaUser KafkaUserStatus schema reference Kafka users in the Kafka cluster KafkaConnect KafkaConnectStatus schema reference The Kafka Connect cluster and connector plugins KafkaConnector KafkaConnectorStatus schema reference KafkaConnector resources KafkaMirrorMaker2 KafkaMirrorMaker2Status schema reference The Kafka MirrorMaker 2 cluster and internal connectors KafkaMirrorMaker KafkaMirrorMakerStatus schema reference The Kafka MirrorMaker cluster KafkaBridge KafkaBridgeStatus schema reference The Streams for Apache Kafka Bridge KafkaRebalance KafkaRebalance schema reference The status and results of a rebalance StrimziPodSet StrimziPodSetStatus schema reference The number of pods: being managed, using the current version, and in a ready state The status property of a resource provides information on the state of the resource. The status.conditions and status.observedGeneration properties are common to all resources. status.conditions Status conditions describe the current state of a resource. Status condition properties are useful for tracking progress related to the resource achieving its desired state , as defined by the configuration specified in its spec . Status condition properties provide the time and reason the state of the resource changed, and details of events preventing or delaying the operator from realizing the desired state. status.observedGeneration Last observed generation denotes the latest reconciliation of the resource by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation (the current version of the deployment), the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource. The status properties also provide resource-specific information. For example, KafkaStatus provides information on listener addresses, and the ID of the Kafka cluster. KafkaStatus also provides information on the Kafka and Streams for Apache Kafka versions being used. You can check the values of operatorLastSuccessfulVersion and kafkaVersion to determine whether an upgrade of Streams for Apache Kafka or Kafka has completed Streams for Apache Kafka creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit , for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster. 
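Because the status conditions include a Ready type, you can also block until the operator reports that a resource has reached its desired state, which is useful in scripts and CI pipelines. The resource names and timeouts below are illustrative:
Waiting for a resource to become ready
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s
oc wait kafkatopic/my-topic --for=condition=Ready --timeout=120s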
Here we see the status properties for a Kafka custom resource. Kafka custom resource status apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # ... status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 kafkaMetadataState: KRaft 4 kafkaVersion: 3.7.0 5 kafkaNodePools: 6 - name: broker - name: controller listeners: 7 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 3 8 operatorLastSuccessfulVersion: 2.7 9 1 The Kafka cluster ID. 2 Status conditions describe the current state of the Kafka cluster. 3 The Ready condition indicates that the Cluster Operator considers the Kafka cluster able to handle traffic. 4 Kafka metadata state that shows the mechanism used (KRaft or ZooKeeper) to manage Kafka metadata and coordinate operations. 5 The version of Kafka being used by the Kafka cluster. 6 The node pools belonging to the Kafka cluster. 7 The listeners describe Kafka bootstrap addresses by type. 8 The observedGeneration value indicates the last reconciliation of the Kafka custom resource by the Cluster Operator. 9 The version of the operator that successfully completed the last reconciliation. Note The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a Ready state. 1.1.4. Finding the status of a custom resource Use oc with the status subresource of a custom resource to retrieve information about the resource. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Specify the custom resource and use the -o jsonpath option to apply a standard JSONPath expression to select the status property: oc get kafka <kafka_resource_name> -o jsonpath='{.status}' | jq This expression returns all the status information for the specified custom resource. You can use dot notation, such as status.listeners or status.observedGeneration , to fine-tune the status information you wish to see. Using the jq command line JSON parser tool makes it easier to read the output. Additional resources For more information about using JSONPath, see JSONPath support . 1.2. Streams for Apache Kafka operators Streams for Apache Kafka operators are purpose-built with specialist operational knowledge to effectively manage Kafka on OpenShift. Each operator performs a distinct function. Cluster Operator The Cluster Operator handles the deployment and management of Apache Kafka clusters on OpenShift. It automates the setup of Kafka brokers, and other Kafka components and resources. Topic Operator The Topic Operator manages the creation, configuration, and deletion of topics within Kafka clusters. 
User Operator The User Operator manages Kafka users that require access to Kafka brokers. When you deploy Streams for Apache Kafka, you first deploy the Cluster Operator. The Cluster Operator is then ready to handle the deployment of Kafka. You can also deploy the Topic Operator and User Operator using the Cluster Operator (recommended) or as standalone operators. You would use a standalone operator with a Kafka cluster that is not managed by the Cluster Operator. The Topic Operator and User Operator are part of the Entity Operator. The Cluster Operator can deploy one or both operators based on the Entity Operator configuration. Important To deploy the standalone operators, you need to set environment variables to connect to a Kafka cluster. These environment variables do not need to be set if you are deploying the operators using the Cluster Operator as they will be set by the Cluster Operator. 1.2.1. Watching Streams for Apache Kafka resources in OpenShift namespaces Operators watch and manage Streams for Apache Kafka resources in OpenShift namespaces. The Cluster Operator can watch a single namespace, multiple namespaces, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator can watch a single namespace. The Cluster Operator watches for Kafka resources The Topic Operator watches for KafkaTopic resources The User Operator watches for KafkaUser resources The Topic Operator and the User Operator can only watch a single Kafka cluster in a namespace. And they can only be connected to a single Kafka cluster. If multiple Topic Operators watch the same namespace, name collisions and topic deletion can occur. This is because each Kafka cluster uses Kafka topics that have the same name (such as __consumer_offsets ). Make sure that only one Topic Operator watches a given namespace. When using multiple User Operators with a single namespace, a user with a given username can exist in more than one Kafka cluster. If you deploy the Topic Operator and User Operator using the Cluster Operator, they watch the Kafka cluster deployed by the Cluster Operator by default. You can also specify a namespace using watchedNamespace in the operator configuration. For a standalone deployment of each operator, you specify a namespace and connection to the Kafka cluster to watch in the configuration. 1.2.2. Managing RBAC resources The Cluster Operator creates and manages role-based access control (RBAC) resources for Streams for Apache Kafka components that need access to OpenShift resources. For the Cluster Operator to function, it needs permission within the OpenShift cluster to interact with Kafka resources, such as Kafka and KafkaConnect , as well as managed resources like ConfigMap , Pod , Deployment , and Service . Permission is specified through the following OpenShift RBAC resources: ServiceAccount Role and ClusterRole RoleBinding and ClusterRoleBinding 1.2.2.1. Delegating privileges to Streams for Apache Kafka components The Cluster Operator runs under a service account called strimzi-cluster-operator . It is assigned cluster roles that give it permission to create the RBAC resources for Streams for Apache Kafka components. Role bindings associate the cluster roles with the service account. OpenShift prevents components operating under one ServiceAccount from granting another ServiceAccount privileges that the granting ServiceAccount does not have. 
Because the Cluster Operator creates the RoleBinding and ClusterRoleBinding RBAC resources needed by the resources it manages, it requires a role that gives it the same privileges. The following sections describe the RBAC resources required by the Cluster Operator. 1.2.2.2. ClusterRole resources The Cluster Operator uses ClusterRole resources to provide the necessary access to resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the cluster roles. Note Cluster administrator rights are only needed for the creation of ClusterRole resources. The Cluster Operator will not run under a cluster admin account. The RBAC resources follow the principle of least privilege and contain only those privileges needed by the Cluster Operator to operate the cluster of the Kafka component. All cluster roles are required by the Cluster Operator in order to delegate privileges. Table 1.3. ClusterRole resources Name Description strimzi-cluster-operator-namespaced Access rights for namespace-scoped resources used by the Cluster Operator to deploy and manage the operands. strimzi-cluster-operator-global Access rights for cluster-scoped resources used by the Cluster Operator to deploy and manage the operands. strimzi-cluster-operator-leader-election Access rights used by the Cluster Operator for leader election. strimzi-cluster-operator-watched Access rights used by the Cluster Operator to watch and manage the Streams for Apache Kafka custom resources. strimzi-kafka-broker Access rights to allow Kafka brokers to get the topology labels from OpenShift worker nodes when rack-awareness is used. strimzi-entity-operator Access rights used by the Topic and User Operators to manage Kafka users and topics. strimzi-kafka-client Access rights to allow Kafka Connect, MirrorMaker (1 and 2), and Kafka Bridge to get the topology labels from OpenShift worker nodes when rack-awareness is used. 1.2.2.3. ClusterRoleBinding resources The Cluster Operator uses ClusterRoleBinding and RoleBinding resources to associate its ClusterRole with its ServiceAccount . Cluster role bindings are required by cluster roles containing cluster-scoped resources. Table 1.4. ClusterRoleBinding resources Name Description strimzi-cluster-operator Grants the Cluster Operator the rights from the strimzi-cluster-operator-global cluster role. strimzi-cluster-operator-kafka-broker-delegation Grants the Cluster Operator the rights from the strimzi-entity-operator cluster role. strimzi-cluster-operator-kafka-client-delegation Grants the Cluster Operator the rights from the strimzi-kafka-client cluster role. Table 1.5. RoleBinding resources Name Description strimzi-cluster-operator Grants the Cluster Operator the rights from the strimzi-cluster-operator-namespaced cluster role. strimzi-cluster-operator-leader-election Grants the Cluster Operator the rights from the strimzi-cluster-operator-leader-election cluster role. strimzi-cluster-operator-watched Grants the Cluster Operator the rights from the strimzi-cluster-operator-watched cluster role. strimzi-cluster-operator-entity-operator-delegation Grants the Cluster Operator the rights from the strimzi-cluster-operator-entity-operator-delegation cluster role. 1.2.2.4. ServiceAccount resources The Cluster Operator runs using the strimzi-cluster-operator ServiceAccount . This service account grants it the privileges it requires to manage the operands. 
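To inspect these RBAC objects in a running cluster, you can list them by label. This is a quick check rather than part of the deployment procedure; it assumes the resources were created from the example installation files, which apply the app: strimzi label, and that <namespace> is the namespace where the Cluster Operator runs:
Listing RBAC resources created from the installation files
oc get serviceaccount strimzi-cluster-operator -n <namespace>
oc get clusterroles,clusterrolebindings -l app=strimzi
oc get roles,rolebindings -l app=strimzi -n <namespace>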
The Cluster Operator creates additional ClusterRoleBinding and RoleBinding resources to delegate some of these RBAC rights to the operands. Each of the operands uses its own service account created by the Cluster Operator. This allows the Cluster Operator to follow the principle of least privilege and give the operands only the access rights that are really need. Table 1.6. ServiceAccount resources Name Used by <cluster_name>-zookeeper ZooKeeper pods <cluster_name>-kafka Kafka broker pods <cluster_name>-entity-operator Entity Operator <cluster_name>-cruise-control Cruise Control pods <cluster_name>-kafka-exporter Kafka Exporter pods <cluster_name>-connect Kafka Connect pods <cluster_name>-mirror-maker MirrorMaker pods <cluster_name>-mirrormaker2 MirrorMaker 2 pods <cluster_name>-bridge Kafka Bridge pods 1.2.3. Managing pod resources The StrimziPodSet custom resource is used by Streams for Apache Kafka to create and manage Kafka, Kafka Connect, and MirrorMaker 2 pods. If you are using ZooKeeper, ZooKeeper pods are also created and managed using StrimziPodSet resources. You must not create, update, or delete StrimziPodSet resources. The StrimziPodSet custom resource is used internally and resources are managed solely by the Cluster Operator. As a consequence, the Cluster Operator must be running properly to avoid the possibility of pods not starting and Kafka clusters not being available. Note OpenShift Deployment resources are used for creating and managing the pods of other components: Kafka Bridge, Kafka Exporter, Cruise Control, (deprecated) MirrorMaker 1, User Operator and Topic Operator. 1.3. Using the Kafka Bridge to connect with a Kafka cluster You can use the Streams for Apache Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol. When you set up the Kafka Bridge you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as performing other operations through its REST interface. Additional resources For information on installing and using the Kafka Bridge, see Using the Streams for Apache Kafka Bridge . 1.4. Seamless FIPS support Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. When running Streams for Apache Kafka on a FIPS-enabled OpenShift cluster, the OpenJDK used in Streams for Apache Kafka container images automatically switches to FIPS mode. From version 2.3, Streams for Apache Kafka can run on FIPS-enabled OpenShift clusters without any changes or special configuration. It uses only the FIPS-compliant security libraries from the OpenJDK. Important If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid any issues, we suggest increasing the memory request to at least 512Mi. For more information about the NIST validation program and validated modules, see Cryptographic Module Validation Program on the NIST website. Note Compatibility with the technology previews of Streams for Apache Kafka Proxy and Streams for Apache Kafka Console has not been tested regarding FIPS support. While they are expected to function properly, we cannot guarantee full support at this time. 1.4.1. Minimum password length When running in the FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. 
From Streams for Apache Kafka 2.3, the default password length in Streams for Apache Kafka User Operator is set to 32 characters as well. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. You can do that, for example, by deleting the user secret and waiting for the User Operator to create a new password with the appropriate length. Additional resources Disabling FIPS mode using Cluster Operator configuration What are Federal Information Processing Standards (FIPS) 1.5. Document Conventions User-replaced values User-replaced values, also known as replaceables , are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used. For example, the following code shows that <my_namespace> must be replaced by the correct namespace name: sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml 1.6. Additional resources Streams for Apache Kafka Overview Streams for Apache Kafka Custom Resource API Reference Using the Streams for Apache Kafka Bridge
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /", "get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3", "get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple", "get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user", "delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: clusterId: XP9FP2P-RByvEy0W4cOEUA 1 conditions: 2 - lastTransitionTime: '2023-01-20T17:56:29.396588Z' status: 'True' type: Ready 3 kafkaMetadataState: KRaft 4 kafkaVersion: 3.7.0 5 kafkaNodePools: 6 - name: broker - name: controller listeners: 7 - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9092 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9092' name: plain - addresses: - host: my-cluster-kafka-bootstrap.prm-project.svc port: 9093 bootstrapServers: 'my-cluster-kafka-bootstrap.prm-project.svc:9093' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: tls - addresses: - host: >- 2054284155.us-east-2.elb.amazonaws.com port: 9095 bootstrapServers: >- 2054284155.us-east-2.elb.amazonaws.com:9095 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 - addresses: - host: ip-10-0-172-202.us-east-2.compute.internal port: 31644 bootstrapServers: 'ip-10-0-172-202.us-east-2.compute.internal:31644' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 3 8 operatorLastSuccessfulVersion: 2.7 9", "get kafka <kafka_resource_name> -o jsonpath='{.status}' | jq", "sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/deploy-intro_str
Networking
Networking OpenShift Container Platform 4.13 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/developing_kafka_client_applications/proc-providing-feedback-on-redhat-documentation
Chapter 22. ProjectHelmChartRepository [helm.openshift.io/v1beta1]
Chapter 22. ProjectHelmChartRepository [helm.openshift.io/v1beta1] Description ProjectHelmChartRepository holds namespace-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 22.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the namespace.. 22.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description, it can be used by UI for displaying purposes disabled boolean If set to true, disable the repo usage in the namespace name string Optional associated human readable repository name, it can be used by UI for displaying purposes 22.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description basicAuthConfig object basicAuthConfig is an optional reference to a secret by name that contains the basic authentication credentials to present when connecting to the server. The key "username" is used locate the username. The key "password" is used to locate the password. The namespace for this secret must be same as the namespace where the project helm chart repository is getting instantiated. ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this configmap must be same as the namespace where the project helm chart repository is getting instantiated. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret must be same as the namespace where the project helm chart repository is getting instantiated. url string Chart repository URL 22.1.3. .spec.connectionConfig.basicAuthConfig Description basicAuthConfig is an optional reference to a secret by name that contains the basic authentication credentials to present when connecting to the server. The key "username" is used locate the username. The key "password" is used to locate the password. 
The namespace for this secret must be same as the namespace where the project helm chart repository is getting instantiated. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 22.1.4. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this configmap must be same as the namespace where the project helm chart repository is getting instantiated. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 22.1.5. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret must be same as the namespace where the project helm chart repository is getting instantiated. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 22.1.6. .status Description Observed status of the repository within the namespace.. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. 22.1.7. .status.conditions Description conditions is a list of conditions and their statuses Type array 22.1.8. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 22.2. 
API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/projecthelmchartrepositories GET : list objects of kind ProjectHelmChartRepository /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories DELETE : delete collection of ProjectHelmChartRepository GET : list objects of kind ProjectHelmChartRepository POST : create a ProjectHelmChartRepository /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name} DELETE : delete a ProjectHelmChartRepository GET : read the specified ProjectHelmChartRepository PATCH : partially update the specified ProjectHelmChartRepository PUT : replace the specified ProjectHelmChartRepository /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name}/status GET : read status of the specified ProjectHelmChartRepository PATCH : partially update status of the specified ProjectHelmChartRepository PUT : replace status of the specified ProjectHelmChartRepository 22.2.1. /apis/helm.openshift.io/v1beta1/projecthelmchartrepositories HTTP method GET Description list objects of kind ProjectHelmChartRepository Table 22.1. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepositoryList schema 401 - Unauthorized Empty 22.2.2. /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories HTTP method DELETE Description delete collection of ProjectHelmChartRepository Table 22.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ProjectHelmChartRepository Table 22.3. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectHelmChartRepository Table 22.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.5. Body parameters Parameter Type Description body ProjectHelmChartRepository schema Table 22.6. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 201 - Created ProjectHelmChartRepository schema 202 - Accepted ProjectHelmChartRepository schema 401 - Unauthorized Empty 22.2.3. /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name} Table 22.7. 
Global path parameters Parameter Type Description name string name of the ProjectHelmChartRepository HTTP method DELETE Description delete a ProjectHelmChartRepository Table 22.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 22.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ProjectHelmChartRepository Table 22.10. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ProjectHelmChartRepository Table 22.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.12. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ProjectHelmChartRepository Table 22.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.14. Body parameters Parameter Type Description body ProjectHelmChartRepository schema Table 22.15. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 201 - Created ProjectHelmChartRepository schema 401 - Unauthorized Empty 22.2.4. /apis/helm.openshift.io/v1beta1/namespaces/{namespace}/projecthelmchartrepositories/{name}/status Table 22.16. Global path parameters Parameter Type Description name string name of the ProjectHelmChartRepository HTTP method GET Description read status of the specified ProjectHelmChartRepository Table 22.17. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ProjectHelmChartRepository Table 22.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.19. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ProjectHelmChartRepository Table 22.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.21. Body parameters Parameter Type Description body ProjectHelmChartRepository schema Table 22.22. HTTP responses HTTP code Reponse body 200 - OK ProjectHelmChartRepository schema 201 - Created ProjectHelmChartRepository schema 401 - Unauthorized Empty
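As an illustration of how these endpoints can be exercised directly, the following sketch reads and then patches a repository over the REST path documented above; the namespace (my-project), repository name (my-helm-repo), the patch body, and the API server host are all placeholders, not values taken from this reference:
# Read the specified ProjectHelmChartRepository through the CLI
oc get projecthelmchartrepository my-helm-repo -n my-project -o yaml
# Partially update it against the documented endpoint, requesting server-side field validation
curl -k -X PATCH \
  -H "Authorization: Bearer $(oc whoami -t)" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"connectionConfig":{"url":"https://charts.example.com"}}}' \
  "https://<api-server>:6443/apis/helm.openshift.io/v1beta1/namespaces/my-project/projecthelmchartrepositories/my-helm-repo?fieldValidation=Warn"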
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/projecthelmchartrepository-helm-openshift-io-v1beta1
Appendix A. Primitive types
Appendix A. Primitive types This section describes the primitive data types supported by the API. A.1. String primitive A finite sequence of Unicode characters. A.2. Boolean primitive Represents the false and true concepts used in mathematical logic. The valid values are the strings false and true . Case is ignored by the engine, so for example False and FALSE are also valid values. However, the server will always return lower case values. For backwards compatibility with older versions of the engine, the values 0 and 1 are also accepted. The value 0 has the same meaning as false , and 1 has the same meaning as true . Try to avoid using these values, as support for them may be removed in the future. A.3. Integer primitive Represents the mathematical concept of integer number. The valid values are finite sequences of decimal digits. Currently the engine implements this type using a signed 32 bit integer, so the minimum value is -2^31 (-2147483648) and the maximum value is 2^31-1 (2147483647). However, there are some attributes in the system where the range of values possible with 32 bits isn't enough. In those exceptional cases the engine uses 64 bit integers, in particular for the following attributes: Disk.actual_size Disk.provisioned_size GlusterClient.bytes_read GlusterClient.bytes_written Host.max_scheduling_memory Host.memory HostNic.speed LogicalUnit.size MemoryPolicy.guaranteed NumaNode.memory QuotaStorageLimit.limit StorageDomain.available StorageDomain.used StorageDomain.committed VmBase.memory For these exceptional cases the minimum value is -2^63 (-9223372036854775808) and the maximum value is 2^63-1 (9223372036854775807). Note In the future the integer type will be implemented using unlimited precision integers, so the above limitations and exceptions will eventually disappear. A.4. Decimal primitive Represents the mathematical concept of real number. Currently the engine implements this type using 32-bit IEEE 754 single precision floating point numbers. For some attributes this isn't enough precision. In those exceptional cases the engine uses 64 bit double precision floating point numbers, in particular for the following attributes: QuotaStorageLimit.usage QuotaStorageLimit.memory_limit QuotaStorageLimit.memory_usage Note In the future the decimal type will be implemented using unlimited precision decimal numbers, so the above limitations and exceptions will eventually disappear. A.5. Date primitive Represents a date and time. The format returned by the engine is the one described in the XML Schema specification when requesting XML. For example, if you send a request like this to retrieve the XML representation of a virtual machine: The response body will contain the following XML document: <vm id="123" href="/ovirt-engine/api/vms/123"> ... <creation_time>2016-09-08T09:53:35.138+02:00</creation_time> ... </vm> When requesting the JSON representation, the engine uses a different format: an integer containing the number of milliseconds since Jan 1st, 1970, also known as epoch time . For example, if you send a request like this to retrieve the JSON representation of a virtual machine: GET /ovirt-engine/api/vms/123 Accept: application/json The response body will contain the following JSON document: { "id": "123", "href": "/ovirt-engine/api/vms/123", ... "creation_time": 1472564909990, ... } Note In both cases, the dates returned by the engine use the time zone configured in the server where it is running. In these examples, the time zone is UTC+2.
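As a quick way to make sense of the epoch-time value shown in the JSON example (this is only a sketch, assuming a shell with GNU date available), you can convert the millisecond timestamp back into a readable date:
# creation_time is in milliseconds since the epoch; drop the last three digits and let GNU date render it in UTC
date -u -d @1472564909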
[ "GET /ovirt-engine/api/vms/123 Accept: application/xml", "<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <creation_time>2016-09-08T09:53:35.138+02:00</creation_time> </vm>", "GET /ovirt-engine/api/vms/123 Accept: application/json", "{ \"id\": \"123\", \"href=\"/ovirt-engine/api/vms/123\", \"creation_time\": 1472564909990, }" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/rest_api_guide/primitive-types
Chapter 3. Getting started with Kaoto
Chapter 3. Getting started with Kaoto The following procedure explains how to create and store integrations with Kaoto. Open VS Code and select "Open Folder". In the folder selection dialog, select the folder to store your integrations or create a new folder and select it. Open the Command Palette ( Ctrl+Shift+P ), paste the following command, and press Enter. Create a Camel route using YAML DSL Provide a name for the new file without an extension and press Enter. The file is created with the extension camel.yaml (here: demo.camel.yaml ). Select the log step to configure the Message in the configuration panel. Using the icons below the route image, you can choose between a vertical/horizontal layout of the route, zoom in/out, and so on. Opening the Catalog on the far right displays the Camel catalog. There are also several filtering options in the Catalog, which greatly simplify finding what you need. To add a component to the Camel route, click on the dot pattern of an existing component or invoke the right-click context menu on the step and select "Append" . The Camel component Catalog is displayed; click on the component you want and it is added to the route. To remove an added component from the Camel route, click on the dot pattern of the existing component or invoke the right-click context menu on the step and select "Delete" . The existing component is removed from the route. 3.1. Running the Camel Route To get started with simple routes, they can be launched with Camel JBang. With demo.camel.yaml open, click the button Run Camel Application with JBang in the editor quick action menu at the top right of the editor. The terminal opens with the running Camel route. The first time, it can take several seconds for Camel JBang to initialize.
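If you prefer the terminal to the editor button, the same route file can typically also be started with the Camel JBang CLI; this sketch assumes JBang itself is already installed on your machine, which is not covered by the procedure above:
# One-time install of the Camel CLI via JBang
jbang app install camel@apache/camel
# Run the route file created above
camel run demo.camel.yaml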
[ "Create a Camel route using YAML DSL" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/kaoto/getting-started-with-kaoto
Chapter 5. Setting up your development environment
Chapter 5. Setting up your development environment You can follow the procedures in this section to set up your development environment to create automation execution environments. 5.1. Installing Ansible Builder You can install Ansible Builder using Red Hat Subscription Management (RHSM) to attach your Red Hat Ansible Automation Platform subscription. Attaching your Red Hat Ansible Automation Platform subscription allows you to access subscription-only resources necessary to install ansible-builder . Once you attach your subscription, the necessary repository for ansible-builder is automatically enabled. Note You must have valid subscriptions attached on the host before installing ansible-builder . Procedure In your terminal, run the following command to install Ansible Builder and activate your Ansible Automation Platform repo: # dnf install --enablerepo ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-builder 5.2. Installing Automation content navigator on RHEL from an RPM You can install Automation content navigator on Red Hat Enterprise Linux (RHEL) from an RPM. Prerequisites You have installed RHEL 8 or later. You registered your system with Red Hat Subscription Manager. Note Ensure that you only install the navigator matching your current Red Hat Ansible Automation Platform environment. Procedure Attach the Red Hat Ansible Automation Platform SKU: USD subscription-manager attach --pool=<sku-pool-id> Install Automation content navigator with the following command: # dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-navigator Verification Verify your Automation content navigator installation: USD ansible-navigator --help The following example demonstrates a successful installation: 5.3. Downloading base automation execution environments Base images that ship with Ansible Automation Platform 2.0 are hosted on the Red Hat Ecosystem Catalog (registry.redhat.io). Prerequisites You have a valid Red Hat Ansible Automation Platform subscription. Procedure Log in to registry.redhat.io USD podman login registry.redhat.io Pull the base images from the registry USD podman pull registry.redhat.io/aap/<image name>
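With ansible-builder installed and a base image pulled, a minimal build might look like the following sketch; the definition file name and the my_ee tag are placeholders for your own content, and the contents of the definition file itself are not shown here:
# Build a custom execution environment from a definition file and tag it
ansible-builder build --tag my_ee --file execution-environment.yml
# Inspect the resulting image locally
podman images my_ee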
[ "dnf install --enablerepo ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-builder", "subscription-manager attach --pool=<sku-pool-id>", "dnf install --enablerepo=ansible-automation-platform-2.3-for-rhel-8-x86_64-rpms ansible-navigator", "ansible-navigator --help", "podman login registry.redhat.io", "podman pull registry.redhat.io/aap/<image name>" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_creator_guide/setting-up-dev-environment
Chapter 2. Eviction [policy/v1]
Chapter 2. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deleteOptions DeleteOptions DeleteOptions may be provided kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta ObjectMeta describes the pod that is being evicted. 2.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/pods/{name}/eviction POST : create eviction of a Pod 2.2.1. /api/v1/namespaces/{namespace}/pods/{name}/eviction Table 2.1. Global path parameters Parameter Type Description name string name of the Eviction Table 2.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create eviction of a Pod Table 2.3. Body parameters Parameter Type Description body Eviction schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Eviction schema 201 - Created Eviction schema 202 - Accepted Eviction schema 401 - Unauthorized Empty
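For illustration, an eviction can be requested from the command line by POSTing an Eviction object to the pod's eviction subresource shown above; the pod name (my-pod) and namespace (my-namespace) are hypothetical:
# Write a minimal Eviction body for pod "my-pod" in namespace "my-namespace"
cat > eviction.json <<'EOF'
{
  "apiVersion": "policy/v1",
  "kind": "Eviction",
  "metadata": {
    "name": "my-pod",
    "namespace": "my-namespace"
  }
}
EOF
# POST it to the documented endpoint
oc create --raw /api/v1/namespaces/my-namespace/pods/my-pod/eviction -f eviction.json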
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/policy_apis/eviction-policy-v1
9.10. Balloon Driver
9.10. Balloon Driver The balloon driver allows guests to express to the hypervisor how much memory they require. The balloon driver allows the host to efficiently allocate and deallocate memory for the guest, and allows free memory to be allocated to other guests and processes. Guests using the balloon driver can mark sections of the guest's RAM as not in use (balloon inflation). The hypervisor can free the memory and use the memory for other host processes or other guests on that host. When the guest requires the freed memory again, the hypervisor can reallocate RAM to the guest (balloon deflation).
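As a quick, non-authoritative check from inside a Linux guest, you can confirm that the balloon device and driver are present; this assumes the virtio drivers are installed in the guest:
# Check that the virtio balloon module is loaded in the guest
lsmod | grep virtio_balloon
# List the PCI device exposed by the hypervisor
lspci | grep -i balloon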
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/balloon_driver
Chapter 2. Preparing for Red Hat Quay (high availability)
Chapter 2. Preparing for Red Hat Quay (high availability) Note This procedure presents guidance on how to set up a highly available, production-quality deployment of Red Hat Quay. 2.1. Prerequisites Here are a few things you need to know before you begin the Red Hat Quay high availability deployment: Either Postgres or MySQL can be used to provide the database service. Postgres was chosen here as the database because it includes the features needed to support Clair security scanning. Other options include: Crunchy Data PostgreSQL Operator: Although not supported directly by Red Hat, the CrunchDB Operator is available from Crunchy Data for use with Red Hat Quay. If you take this route, you should have a support contract with Crunchy Data and work directly with them for usage guidance or issues relating to the operator and their database. If your organization already has a high-availability (HA) database, you can use that database with Red Hat Quay. See the Red Hat Quay Support Policy for details on support for third-party databases and other components. Ceph Object Gateway (also called RADOS Gateway) is one example of a product that can provide the object storage needed by Red Hat Quay. If you want your Red Hat Quay setup to do geo-replication, Ceph Object Gateway or other supported object storage is required. For cloud installations, you can use any of the following cloud object storage: Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Quay) Azure Blob Storage Google Cloud Storage Ceph Object Gateway OpenStack Swift CloudFront + S3 NooBaa S3 Storage The haproxy server is used in this example, although you can use any proxy service that works for your environment. Number of systems: This procedure uses seven systems (physical or virtual) that are assigned with the following tasks: A: db01: Load balancer and database : Runs the haproxy load balancer and a Postgres database. Note that these components are not themselves highly available, but are used to indicate how you might set up your own load balancer or production database. B: quay01, quay02, quay03: Quay and Redis : Three (or more) systems are assigned to run the Quay and Redis services. C: ceph01, ceph02, ceph03, ceph04, ceph05: Ceph : Three (or more) systems provide the Ceph service, for storage. If you are deploying to a cloud, you can use the cloud storage features described earlier. This procedure employs an additional system for Ansible (ceph05) and one for a Ceph Object Gateway (ceph04). Each system should have the following attributes: Red Hat Enterprise Linux (RHEL) 8: Obtain the latest Red Hat Enterprise Linux 8 server media from the Downloads page and follow the installation instructions available in the Product Documentation for Red Hat Enterprise Linux 9 . Valid Red Hat Subscription : Configure a valid Red Hat Enterprise Linux 8 server subscription. CPUs : Two or more virtual CPUs RAM : 4GB for each A and B system; 8GB for each C system Disk space : About 20GB of disk space for each A and B system (10GB for the operating system and 10GB for docker storage). At least 30GB of disk space for C systems (or more depending on required container storage). 2.2. Using podman This document uses podman for creating and deploying containers. If you do not have podman available on your system, you should be able to use the equivalent docker commands. For more information on podman and related technologies, see Building, running, and managing Linux containers on Red Hat Enterprise Linux 8 . 
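Before proceeding, it can help to confirm that Podman is installed and able to run containers on each system; the UBI image used here is only an example, and any image your hosts can pull will do:
# Confirm the Podman version
podman --version
# Optionally run a throwaway container to verify basic functionality
sudo podman run --rm registry.access.redhat.com/ubi8/ubi echo "podman is working"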
Note Podman is strongly recommended for highly available, production quality deployments of Red Hat Quay. Docker has not been tested with Red Hat Quay 3.9, and will be deprecated in a future release. 2.3. Setting up the HAProxy load balancer and the PostgreSQL database Use the following procedure to set up the HAProxy load balancer and the PostgreSQL database. Prerequisites You have installed the Podman or Docker CLI. Procedure On the first two systems, q01 and q02 , install the HAProxy load balancer and the PostgreSQL database. This configures HAProxy as the access point and load balancer for the following services running on other systems: Red Hat Quay (ports 80 and 443 on B systems) Redis (port 6379 on B systems) RADOS (port 7480 on C systems) Open all HAProxy ports in SELinux and selected HAProxy ports in the firewall: # setsebool -P haproxy_connect_any=on # firewall-cmd --permanent --zone=public --add-port=6379/tcp --add-port=7480/tcp success # firewall-cmd --reload success Configure the /etc/haproxy/haproxy.cfg to point to the systems and ports providing the Red Hat Quay, Redis and Ceph RADOS services. The following are examples of defaults and added frontend and backend settings: After the new haproxy.cfg file is in place, restart the HAProxy service by entering the following command: # systemctl restart haproxy Create a folder for the PostgreSQL database by entering the following command: USD mkdir -p /var/lib/pgsql/data Set the following permissions for the /var/lib/pgsql/data folder: USD chmod 777 /var/lib/pgsql/data Enter the following command to start the PostgreSQL database: USD sudo podman run -d --name postgresql_database \ -v /var/lib/pgsql/data:/var/lib/pgsql/data:Z \ -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass \ -e POSTGRESQL_DATABASE=quaydb -p 5432:5432 \ registry.redhat.io/rhel8/postgresql-13:1-109 Note Data from the container will be stored on the host system in the /var/lib/pgsql/data directory. List the available extensions by entering the following command: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "SELECT * FROM pg_available_extensions" | /opt/rh/rh-postgresql96/root/usr/bin/psql' Example output name | default_version | installed_version | comment -----------+-----------------+-------------------+---------------------------------------- adminpack | 1.0 | | administrative functions for PostgreSQL ... 
Create the pg_trgm extension by entering the following command: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm;" | /opt/rh/rh-postgresql96/root/usr/bin/psql -d quaydb' Confirm that the pg_trgm has been created by entering the following command: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "SELECT * FROM pg_extension" | /opt/rh/rh-postgresql96/root/usr/bin/psql' Example output extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition ---------+----------+--------------+----------------+------------+-----------+-------------- plpgsql | 10 | 11 | f | 1.0 | | pg_trgm | 10 | 2200 | t | 1.3 | | (2 rows) Alter the privileges of the Postgres user quayuser and grant them the superuser role to give the user unrestricted access to the database: USD sudo podman exec -it postgresql_database /bin/bash -c 'echo "ALTER USER quayuser WITH SUPERUSER;" | /opt/rh/rh-postgresql96/root/usr/bin/psql' Example output ALTER ROLE If you have a firewalld service active on your system, run the following commands to make the PostgreSQL port available through the firewall: # firewall-cmd --permanent --zone=trusted --add-port=5432/tcp # firewall-cmd --reload Optional. If you do not have the postgres CLI package installed, install it by entering the following command: # yum install postgresql -y Use the psql command to test connectivity to the PostgreSQL database. Note To verify that you can access the service remotely, run the following command on a remote system. Example output Password for user test: psql (9.2.23, server 9.6.5) WARNING: psql version 9.2, server version 9.6. Some psql features might not work. Type "help" for help. test=> \q 2.4. Set Up Ceph For this Red Hat Quay configuration, we create a three-node Ceph cluster, with several other supporting nodes, as follows: ceph01, ceph02, and ceph03 - Ceph Monitor, Ceph Manager and Ceph OSD nodes ceph04 - Ceph RGW node ceph05 - Ceph Ansible administration node For details on installing Ceph nodes, see Installing Red Hat Ceph Storage on Red Hat Enterprise Linux . Once you have set up the Ceph storage cluster, create a Ceph Object Gateway (also referred to as a RADOS gateway). See Installing the Ceph Object Gateway for details. 2.4.1. Install each Ceph node On ceph01, ceph02, ceph03, ceph04, and ceph05, do the following: Review prerequisites for setting up Ceph nodes in Requirements for Installing Red Hat Ceph Storage . In particular: Decide if you want to use RAID controllers on OSD nodes . Decide if you want a separate cluster network for your Ceph Network Configuration . Prepare OSD storage (ceph01, ceph02, and ceph03 only). Set up the OSD storage on the three OSD nodes (ceph01, ceph02, and ceph03). See OSD Ansible Settings in Table 3.2 for details on supported storage types that you will enter into your Ansible configuration later. For this example, a single, unformatted block device ( /dev/sdb ), that is separate from the operating system, is configured on each of the OSD nodes. If you are installing on metal, you might want to add an extra hard drive to the machine for this purpose. Install Red Hat Enterprise Linux Server edition, as described in the RHEL 7 Installation Guide . Register and subscribe each Ceph node as described in the Registering Red Hat Ceph Storage Nodes . Here is how to subscribe to the necessary repos: Create an ansible user with root privilege on each node. Choose any name you like. For example: 2.4.2. 
Configure the Ceph Ansible node (ceph05) Log into the Ceph Ansible node (ceph05) and configure it as follows. You will need the ceph01, ceph02, and ceph03 nodes to be running to complete these steps. In the Ansible user's home directory, create a directory to store temporary values created from the ceph-ansible playbook: Enable password-less ssh for the ansible user. Run ssh-keygen on ceph05 (leave the passphrase empty), then run ssh-copy-id repeatedly to copy the public key to the Ansible user on the ceph01, ceph02, and ceph03 systems: Install the ceph-ansible package: Create a symbolic link between these two directories: Create copies of the Ceph sample yml files to modify: Edit the copied group_vars/all.yml file. See General Ansible Settings in Table 3.1 for details. For example: Note that your network device and address range may differ. Edit the copied group_vars/osds.yml file. See the OSD Ansible Settings in Table 3.2 for details. In this example, the second disk device ( /dev/sdb ) on each OSD node is used for both data and journal storage: Edit the /etc/ansible/hosts inventory file to identify the Ceph nodes as Ceph monitor, OSD and manager nodes. In this example, the storage devices are identified on each node as well: Add this line to the /etc/ansible/ansible.cfg file, to save the output from each Ansible playbook run into your Ansible user's home directory: Check that Ansible can reach all the Ceph nodes you configured as your Ansible user: Run the ceph-ansible playbook (as your Ansible user): At this point, the Ansible playbook will check your Ceph nodes and configure them for the services you requested. If anything fails, make the needed corrections and rerun the command. Log into one of the three Ceph nodes (ceph01, ceph02, or ceph03) and check the health of the Ceph cluster: On the same node, verify that monitoring is working using rados: 2.4.3. Install the Ceph Object Gateway On the Ansible system (ceph05), configure a Ceph Object Gateway for your Ceph Storage cluster (which will ultimately run on ceph04). See Installing the Ceph Object Gateway for details. 2.5. Set up Redis With Red Hat Enterprise Linux 8 server installed on each of the three Red Hat Quay systems (quay01, quay02, and quay03), install and start the Redis service as follows: Install / Deploy Redis : Run Redis as a container on each of the three quay0* systems: Check Redis connectivity : You can use the telnet command to test connectivity to the redis service. Type MONITOR (to begin monitoring the service) and QUIT to exit: Note For more information on using podman and restarting containers, see the section "Using podman" earlier in this document.
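Two lightweight checks that can be useful at this stage are sketched below; they assume the haproxy binary is present on the load balancer node and that a redis-cli client is installed on the node you run the loop from, and they use the host names from this example:
# Validate the haproxy.cfg syntax on the load balancer node before restarting the service
haproxy -c -f /etc/haproxy/haproxy.cfg
# From any node with redis-cli installed, confirm each Redis container answers PING
for h in quay01 quay02 quay03; do redis-cli -h "$h" -p 6379 ping; done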
[ "setsebool -P haproxy_connect_any=on firewall-cmd --permanent --zone=public --add-port=6379/tcp --add-port=7480/tcp success firewall-cmd --reload success", "#--------------------------------------------------------------------- common defaults that all the 'listen' and 'backend' sections will use if not designated in their block #--------------------------------------------------------------------- defaults mode tcp log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 #--------------------------------------------------------------------- main frontend which proxys to the backends #--------------------------------------------------------------------- frontend fe_http *:80 default_backend be_http frontend fe_https *:443 default_backend be_https frontend fe_redis *:6379 default_backend be_redis frontend fe_rdgw *:7480 default_backend be_rdgw backend be_http balance roundrobin server quay01 quay01:80 check server quay02 quay02:80 check server quay03 quay03:80 check backend be_https balance roundrobin server quay01 quay01:443 check server quay02 quay02:443 check server quay03 quay03:443 check backend be_rdgw balance roundrobin server ceph01 ceph01:7480 check server ceph02 ceph02:7480 check server ceph03 ceph03:7480 check backend be_redis server quay01 quay01:6379 check inter 1s server quay02 quay02:6379 check inter 1s server quay03 quay03:6379 check inter 1s", "systemctl restart haproxy", "mkdir -p /var/lib/pgsql/data", "chmod 777 /var/lib/pgsql/data", "sudo podman run -d --name postgresql_database -v /var/lib/pgsql/data:/var/lib/pgsql/data:Z -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quaydb -p 5432:5432 registry.redhat.io/rhel8/postgresql-13:1-109", "sudo podman exec -it postgresql_database /bin/bash -c 'echo \"SELECT * FROM pg_available_extensions\" | /opt/rh/rh-postgresql96/root/usr/bin/psql'", "name | default_version | installed_version | comment -----------+-----------------+-------------------+---------------------------------------- adminpack | 1.0 | | administrative functions for PostgreSQL", "sudo podman exec -it postgresql_database /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS pg_trgm;\" | /opt/rh/rh-postgresql96/root/usr/bin/psql -d quaydb'", "sudo podman exec -it postgresql_database /bin/bash -c 'echo \"SELECT * FROM pg_extension\" | /opt/rh/rh-postgresql96/root/usr/bin/psql'", "extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition ---------+----------+--------------+----------------+------------+-----------+-------------- plpgsql | 10 | 11 | f | 1.0 | | pg_trgm | 10 | 2200 | t | 1.3 | | (2 rows)", "sudo podman exec -it postgresql_database /bin/bash -c 'echo \"ALTER USER quayuser WITH SUPERUSER;\" | /opt/rh/rh-postgresql96/root/usr/bin/psql'", "ALTER ROLE", "firewall-cmd --permanent --zone=trusted --add-port=5432/tcp", "firewall-cmd --reload", "yum install postgresql -y", "psql -h localhost quaydb quayuser", "Password for user test: psql (9.2.23, server 9.6.5) WARNING: psql version 9.2, server version 9.6. Some psql features might not work. Type \"help\" for help. 
test=> \\q", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-7-server-extras-rpms subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-rpms subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms", "USER_NAME=ansibleadmin useradd USDUSER_NAME -c \"Ansible administrator\" passwd USDUSER_NAME New password: ********* Retype new password: ********* cat << EOF >/etc/sudoers.d/admin admin ALL = (root) NOPASSWD:ALL EOF chmod 0440 /etc/sudoers.d/USDUSER_NAME", "USER_NAME=ansibleadmin sudo su - USDUSER_NAME [ansibleadmin@ceph05 ~]USD mkdir ~/ceph-ansible-keys", "USER_NAME=ansibleadmin sudo su - USDUSER_NAME [ansibleadmin@ceph05 ~]USD ssh-keygen [ansibleadmin@ceph05 ~]USD ssh-copy-id USDUSER_NAME@ceph01 [ansibleadmin@ceph05 ~]USD ssh-copy-id USDUSER_NAME@ceph02 [ansibleadmin@ceph05 ~]USD ssh-copy-id USDUSER_NAME@ceph03 [ansibleadmin@ceph05 ~]USD exit #", "yum install ceph-ansible", "ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars", "cd /usr/share/ceph-ansible cp group_vars/all.yml.sample group_vars/all.yml cp group_vars/osds.yml.sample group_vars/osds.yml cp site.yml.sample site.yml", "ceph_origin: repository ceph_repository: rhcs ceph_repository_type: cdn ceph_rhcs_version: 3 monitor_interface: eth0 public_network: 192.168.122.0/24", "osd_scenario: collocated devices: - /dev/sdb dmcrypt: true osd_auto_discovery: false", "[mons] ceph01 ceph02 ceph03 [osds] ceph01 devices=\"[ '/dev/sdb' ]\" ceph02 devices=\"[ '/dev/sdb' ]\" ceph03 devices=\"[ '/dev/sdb' ]\" [mgrs] ceph01 devices=\"[ '/dev/sdb' ]\" ceph02 devices=\"[ '/dev/sdb' ]\" ceph03 devices=\"[ '/dev/sdb' ]\"", "retry_files_save_path = ~/", "USER_NAME=ansibleadmin sudo su - USDUSER_NAME [ansibleadmin@ceph05 ~]USD ansible all -m ping ceph01 | SUCCESS => { \"changed\": false, \"ping\": \"pong\" } ceph02 | SUCCESS => { \"changed\": false, \"ping\": \"pong\" } ceph03 | SUCCESS => { \"changed\": false, \"ping\": \"pong\" } [ansibleadmin@ceph05 ~]USD", "[ansibleadmin@ceph05 ~]USD cd /usr/share/ceph-ansible/ [ansibleadmin@ceph05 ~]USD ansible-playbook site.yml", "ceph health HEALTH_OK", "ceph osd pool create test 8 echo 'Hello World!' > hello-world.txt rados --pool test put hello-world hello-world.txt rados --pool test get hello-world fetch.txt cat fetch.txt Hello World!", "mkdir -p /var/lib/redis chmod 777 /var/lib/redis sudo podman run -d -p 6379:6379 -v /var/lib/redis:/var/lib/redis/data:Z registry.redhat.io/rhel8/redis-5", "yum install telnet -y telnet 192.168.122.99 6379 Trying 192.168.122.99 Connected to 192.168.122.99. Escape character is '^]'. MONITOR +OK +1525703165.754099 [0 172.17.0.1:43848] \"PING\" QUIT +OK Connection closed by foreign host." ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploy_red_hat_quay_-_high_availability/preparing_for_red_hat_quay_high_availability
Chapter 4. System requirements
Chapter 4. System requirements Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case. Prerequisites You can obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation see Understanding privilege escalation . You can de-escalate privileges from root to users such as: AWX, PostgreSQL, Event-Driven Ansible, or Pulp. You have configured an NTP client on all nodes. For more information, see Configuring NTP server using Chrony . 4.1. Red Hat Ansible Automation Platform system requirements Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. Table 4.1. Base system Requirement Required Notes Subscription Valid Red Hat Ansible Automation Platform OS Red Hat Enterprise Linux 8.8 or later 64-bit (x86, ppc64le, s390x, aarch64), or Red Hat Enterprise Linux 9.0 or later 64-bit (x86, ppc64le, s390x, aarch64) Red Hat Ansible Automation Platform is also supported on OpenShift, see Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform for more information. Ansible-core Ansible-core version 2.14 or later Ansible Automation Platform includes execution environments that contain ansible-core 2.15. Python 3.9 or later Browser A currently supported version of Mozilla FireFox or Google Chrome Database PostgreSQL version 13 The following are necessary for you to work with project updates and collections: Ensure that the network ports and protocols listed in Table 5.3. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server. Disable SSL inspection either when using self-signed certificates or for the Red Hat domains. Note The requirements for systems managed by Ansible Automation Platform are the same as for Ansible. See Installing Ansible in the Ansible Community Documentation. Additional notes for Red Hat Ansible Automation Platform requirements Red Hat Ansible Automation Platform depends on Ansible Playbooks and requires the installation of the latest stable version of ansible-core. You can download ansible-core manually or download it automatically as part of your installation of Red Hat Ansible Automation Platform. For new installations, automation controller installs the latest release package of ansible-core. If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you. If you have installed Ansible manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it. Note You must install Ansible using a package manager such as dnf , and the latest stable version of the package manager must be installed for Red Hat Ansible Automation Platform to work properly. Ansible version 2.14 is required for versions 2.4 and later. 4.2. Automation controller system requirements Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, four node types are provided as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes. 
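Before sizing individual nodes, it can be worth confirming that each host meets the base requirements listed above; the commands below are standard RHEL commands and carry no Ansible Automation Platform-specific assumptions:
# Confirm the RHEL release, Python version, and installed ansible-core version on a host
cat /etc/redhat-release
python3 --version
rpm -q ansible-core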
Use the following recommendations for node sizing: Note On control and hybrid nodes, allocate a minimum of 20 GB to /var/lib/awx for execution environment storage. Execution nodes Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks. Note The RAM and CPU resources stated might not be required for packages installed on an execution node but are the minimum recommended to handle the job load for a node to run an average number of jobs simultaneously. Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment. For further information about required RAM and CPU levels, see Performance tuning for automation controller . Table 4.2. Execution nodes Requirement Minimum required RAM 16 GB CPUs 4 Local disk 40GB minimum Control nodes Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing. Table 4.3. Control nodes Requirement Minimum required RAM 16 GB CPUs 4 Local disk 40GB minimum with at least 20GB available under /var/lib/awx Storage volume must be rated for a minimum baseline of 1500 IOPS Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects, to avoid disk space errors. Hop nodes Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput, CPU activity is low. Network bandwidth and latency are generally a more important factor than either RAM or CPU. Table 4.4. Hop nodes Requirement Minimum required RAM 16 GB CPUs 4 Local disk 40 GB Actual RAM requirements vary based on how many hosts automation controller will manage simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks and 2 GB reservation for automation controller. For more information, see Automation controller capacity determination and job impact . If forks is set to 400, 42 GB of memory is recommended. Automation controller hosts check if umask is set to 0022. If not, the setup fails. Set umask=0022 to avoid this error. A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches: Use rolling updates. Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible. In cases where automation controller is producing or deploying images such as AMIs. Additional resources For more information about obtaining an automation controller subscription, see Importing a subscription . For questions, contact Ansible support through the Red Hat Customer Portal . 4.3. Automation hub system requirements Automation hub enables you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. 
On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation. Automation hub has the following system requirements: Requirement Required Notes RAM 8 GB minimum 8 GB RAM (minimum and recommended for Vagrant trial installations) 8 GB RAM (minimum for external standalone PostgreSQL databases) For capacity based on forks in your configuration, see Automation controller capacity determination and job impact . CPUs 2 minimum For capacity based on forks in your configuration, see Automation controller capacity determination and job impact . Local disk 60 GB disk Dedicate a minimum of 40GB to /var for collection storage. Important Ansible automation execution nodes and automation hub system requirements are different and might not meet your network's needs. The general formula for determining how much memory you need is: Total control capacity = Total Memory in MB / Fork size in MB. Note Private automation hub If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation which cannot be used as container registry without certificate issues. To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file. This adds the external address to /etc/pulp/settings.py . This implies that you only want to use the external address. For information about inventory file variables, see Inventory file variables in the Red Hat Ansible Automation Platform Installation Guide . 4.3.1. High availability automation hub requirements Before deploying a high availability (HA) automation hub, ensure that you have a shared filesystem installed in your environment and that you have configured your network storage system, if applicable. 4.3.1.1. Required shared filesystem A high availability automation hub requires you to have a shared file system, such as NFS, already installed in your environment. Before you run the Red Hat Ansible Automation Platform installer, verify that you installed the /var/lib/pulp directory across your cluster as part of the shared file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected in one of your nodes, causing your high availability automation hub setup to fail. If you receive an error stating /var/lib/pulp is not detected in one of your nodes, ensure /var/lib/pulp is properly mounted in all servers and re-run the installer. 4.3.1.2. Installing firewalld for network storage If you intend to install a HA automation hub using a network storage on the automation hub nodes itself, you must first install and use firewalld to open the necessary ports as required by your shared storage system before running the Ansible Automation Platform installer. Install and configure firewalld by executing the following commands: Install the firewalld daemon: USD dnf install firewalld Add your network storage under <service> using the following command: USD firewall-cmd --permanent --add-service=<service> Note For a list of supported services, use the USD firewall-cmd --get-services command Reload to apply the configuration: USD firewall-cmd --reload 4.4. 
Event-Driven Ansible controller system requirements The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores. Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations: Requirement Required RAM 16 GB CPUs 4 Local disk 40 GB minimum Important If you are running Red Hat Enterprise Linux 8 and want to set your memory limits, you must have cgroup v2 enabled before you install Event-Driven Ansible. For specific instructions, see the Knowledge-Centered Support (KCS) article, Ansible Automation Platform Event-Driven Ansible controller for Red Hat Enterprise Linux 8 requires cgroupv2 . When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources. See Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database for an example on setting Event-Driven Ansible controller maximum running activations. 4.5. PostgreSQL requirements Red Hat Ansible Automation Platform uses PostgreSQL 13. PostgreSQL user passwords are hashed with SCRAM-SHA-256 secure hashing algorithm before storing in the database. To determine if your automation controller instance has access to the database, you can do so with the command, awx-manage check_db command. Table 4.5. Database Service Required Notes Database 20 GB dedicated hard disk space 4 CPUs 16 GB RAM 150 GB+ recommended Storage volume must be rated for a high baseline IOPS (1500 or more). All automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook run every hour (24 times a day) across 250 hosts, with 20 tasks, will store over 800000 events in the database every week. If not enough space is reserved in the database, the old job runs and facts must be cleaned on a regular basis. For more information, see Management Jobs in the Automation Controller Administration Guide . Note PostgreSQL versions older than v10 might not have ICU support. You must build your PostgreSQL server with ICU support or you may encounter unexpected errors. PostgreSQL configurations Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see Database Settings . Additional resources For more information about tuning your PostgreSQL server, see the PostgreSQL documentation . 4.5.1. Setting up an external (customer supported) database Important When using an external database with Ansible Automation Platform, you must create and maintain that database. 
Ensure that you clear your external database when uninstalling Ansible Automation Platform. To create a database, user and password on an external PostgreSQL compliant database for use with automation controller, use the following procedure. Procedure Install and then connect to a PostgreSQL compliant database server with superuser privileges. # psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>: Where: -h hostname --host=hostname Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket. -d dbname --dbname=dbname Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options. -U username --username=username Connect to the database as the user username instead of the default. (You must have permission to do so.) Create the user, database, and password with the createDB or administrator role assigned to the user. For further information, see Database Roles . Add the database credentials and host details to the automation controller inventory file as an external database. The default values are used in the following example. [database] pg_host='db.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='redhat' Run the installer. If you are using a PostgreSQL database with automation controller, the database is owned by the connecting user and must have a createDB or administrator role assigned to it. Check that you are able to connect to the created database with the user, password and database name. Check the permission of the user, the user should have the createDB or administrator role. Note During this procedure, you must check the External Database coverage. For further information, see https://access.redhat.com/articles/4010491 4.5.2. Enabling the hstore extension for the automation hub PostgreSQL database From Ansible Automation Platform 2.4, the database migration script uses hstore fields to store information, therefore the hstore extension to the automation hub PostgreSQL database must be enabled. This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server. If the PostgreSQL database is external, you must enable the hstore extension to the automation hub PostreSQL database manually before automation hub installation. If the hstore extension is not enabled before automation hub installation, a failure is raised during database migration. Procedure Check if the extension is available on the PostgreSQL server (automation hub database). USD psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'" Where the default value for <automation hub database> is automationhub . 
Example output with hstore available : name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row) Example output with hstore not available : name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows) On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command: dnf install postgresql-contrib Create the hstore PostgreSQL extension on the automation hub database with the following command: USD psql -d <automation hub database> -c "CREATE EXTENSION hstore;" The output of which is: CREATE EXTENSION In the following output, the installed_version field contains the hstore extension used, indicating that hstore is enabled. name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row) 4.5.3. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database Check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system. Prerequisites You have installed the Flexible I/O Tester ( fio ) storage performance benchmarking tool. To install fio , run the following command as the root user: # yum -y install fio You have adequate disk space to store the fio test data log files. The examples shown in the procedure require at least 60GB disk space in the /tmp directory: numjobs sets the number of jobs run by the command. size=10G sets the file size generated by each job. You have adjusted the value of the size parameter. Adjusting this value reduces the amount of test data. Procedure Run a random write test: USD fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \ --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \ --verify=0 --bs=4K --iodepth=64 --rw=randwrite \ --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \ 2>> /tmp/fio_write_iops_error.log Run a random read test: USD fio --name=read_iops --directory=/tmp \ --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \ --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \ --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \ 2>> /tmp/fio_read_iops_error.log Review the results: In the log files written by the benchmark commands, search for the line beginning with iops . This line shows the minimum, maximum, and average values for the test. The following example shows the line in the log file for the random read test: USD cat /tmp/fio_benchmark_read_iops.log read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 [...] iops : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360 [...] You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
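If you want to pull only the IOPS summary lines out of both benchmark logs for a quick comparison against the baseline figures mentioned earlier, a simple grep is usually enough; the paths below are the ones used in the examples above:
# Show the iops summary lines from the write and read benchmark logs, prefixed with the file name
grep -H "iops" /tmp/fio_benchmark_write_iops.log /tmp/fio_benchmark_read_iops.log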
[ "dnf install firewalld", "firewall-cmd --permanent --add-service=<service>", "firewall-cmd --reload", "psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>:", "-h hostname --host=hostname", "-d dbname --dbname=dbname", "-U username --username=username", "[database] pg_host='db.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='redhat'", "psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"", "name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)", "name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)", "dnf install postgresql-contrib", "psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"", "CREATE EXTENSION", "name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)", "yum -y install fio", "fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1 > /tmp/fio_benchmark_write_iops.log 2>> /tmp/fio_write_iops_error.log", "fio --name=read_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1 > /tmp/fio_benchmark_read_iops.log 2>> /tmp/fio_read_iops_error.log", "cat /tmp/fio_benchmark_read_iops.log read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 [...] iops : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360 [...]" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_planning_guide/platform-system-requirements
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_operations_guide/providing-feedback
Chapter 11. Installing a cluster on AWS China
Chapter 11. Installing a cluster on AWS China In OpenShift Container Platform version 4.15, you can install a cluster to the following Amazon Web Services (AWS) China regions: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 11.1. Prerequisites You have an Internet Content Provider (ICP) license. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. 11.2. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS China regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. 11.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. 
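One common way to do this (a sketch assuming the default AWS cloud provider integration, with a hypothetical service name and namespace) is to set the internal load balancer annotation on the Service; note that for an existing Service the underlying load balancer typically must be recreated for the change to take effect:
# Mark a LoadBalancer Service as internal so AWS provisions an internal load balancer
oc -n my-namespace annotate service my-service \
  service.beta.kubernetes.io/aws-load-balancer-internal="true"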
To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network. Note AWS China does not support a VPN connection between the VPC and your network. For more information about the Amazon VPC service in the Beijing and Ningxia regions, see Amazon Virtual Private Cloud in the AWS China documentation. 11.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to the internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 11.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 11.5. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 11.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
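Because the installation program no longer creates these components, you might want to confirm that they already exist and are attached to the VPC that you plan to use before you begin the installation. The following AWS CLI commands are an illustrative sketch only; <vpc_id> is a placeholder for the ID of your existing VPC:

USD aws ec2 describe-vpcs --vpc-ids <vpc_id>                                               # the VPC itself
USD aws ec2 describe-subnets --filters "Name=vpc-id,Values=<vpc_id>"                       # its subnets
USD aws ec2 describe-route-tables --filters "Name=vpc-id,Values=<vpc_id>"                  # its route tables
USD aws ec2 describe-internet-gateways --filters "Name=attachment.vpc-id,Values=<vpc_id>"  # the attached internet gateway

Review the output to confirm that the subnets, route tables, and gateways that you plan to use are present before you continue.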
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. 
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 11.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 11.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. 
For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 11.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 11.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 11.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 11.7. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 1 The AWS profile name that holds your AWS credentials, like beijingadmin . Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 1 The AWS region, like cn-north-1 . Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 The RHCOS VMDK version, like 4.15.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. 
Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 11.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 11.9. 
Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 11.9.1. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 
3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 11.9.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.9.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 11.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 11.9.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 11.2. 
Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 11.9.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.9.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 11.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 11.11. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 11.11.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 11.11.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 11.11.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 11.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 11.4. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 11.11.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 11.11.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 11.11.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 11.11.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 11.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 11.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 11.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service. 11.16. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
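A quick way to tie the CLI login and verification steps above together is a short shell check like the following sketch. The oc get clusteroperators command is standard oc usage rather than a step quoted from this section, and <installation_directory> remains a placeholder for your own path.

export KUBECONFIG=<installation_directory>/auth/kubeconfig
# Confirm that the exported credentials are accepted by the API server.
oc whoami
# Check that every cluster Operator reports Available; an Operator stuck in
# Progressing or Degraded is worth investigating before using the cluster.
oc get clusteroperators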
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
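If you want to spot-check the IAM roles created by the ccoctl aws create-iam-roles command shown above, a query along the following lines can help. This is only a sketch: it assumes that ccoctl prefixes role names with the --name value you supplied, and the JMESPath filter is illustrative.

# List roles whose names start with the ccoctl --name prefix (replace <name>).
aws iam list-roles --query "Roles[?starts_with(RoleName, '<name>')].RoleName" --output table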
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installing-aws-china-region
Installing on Nutanix
Installing on Nutanix OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Nutanix Red Hat OpenShift Documentation Team
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIPs: - 10.40.142.7 12 defaultMachinePlatform: bootType: Legacy categories: 13 - key: <category_key_name> value: <category_value> project: 14 type: name name: <project_name> ingressVIPs: - 10.40.142.8 15 prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml", "cd <path_to_installation_directory>/manifests", "apiVersion: v1 kind: ConfigMap metadata: name: cloud-conf namespace: openshift-cloud-controller-manager data: cloud.conf: \"{ \\\"prismCentral\\\": { \\\"address\\\": \\\"<prism_central_FQDN/IP>\\\", 1 \\\"port\\\": 9440, \\\"credentialRef\\\": { \\\"kind\\\": \\\"Secret\\\", \\\"name\\\": \\\"nutanix-credentials\\\", \\\"namespace\\\": \\\"openshift-cloud-controller-manager\\\" } }, \\\"topologyDiscovery\\\": { \\\"type\\\": \\\"Prism\\\", \\\"topologyCategories\\\": null }, \\\"enableCustomLabeling\\\": true }\"", "spec: cloudConfig: key: config name: cloud-provider-config", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install coreos print-stream-json", "\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"", "platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1", "openshift-install create manifests --dir <installation_directory> 1", "cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests", "ls ./<installation_directory>/manifests", "cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc apply -f ./oc-mirror-workspace/results-<id>/", "oc get imagecontentsourcepolicy", "oc get catalogsource --all-namespaces", "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "compute: platform: nutanix: categories: key:", "compute: platform: nutanix: categories: value:", "compute: platform: nutanix: project: type:", "compute: platform: nutanix: project: name: or uuid:", "compute: platform: nutanix: bootType:", "controlPlane: platform: nutanix: categories: key:", "controlPlane: platform: nutanix: categories: value:", "controlPlane: platform: nutanix: project: type:", "controlPlane: platform: nutanix: project: name: or uuid:", "platform: nutanix: defaultMachinePlatform: categories: key:", "platform: nutanix: defaultMachinePlatform: categories: value:", "platform: nutanix: defaultMachinePlatform: project: type:", "platform: nutanix: defaultMachinePlatform: project: name: or uuid:", "platform: nutanix: defaultMachinePlatform: bootType:", "platform: nutanix: apiVIP:", "platform: nutanix: ingressVIP:", "platform: nutanix: prismCentral: endpoint: address:", "platform: nutanix: prismCentral: endpoint: port:", "platform: nutanix: prismCentral: password:", "platform: nutanix: prismCentral: username:", "platform: nutanix: prismElements: endpoint: address:", "platform: nutanix: prismElements: endpoint: port:", "platform: nutanix: prismElements: uuid:", "platform: nutanix: subnetUUIDs:", "platform: nutanix: clusterOSImage:" ]
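Before pointing clusterOSImage at a mirrored RHCOS image, it can be worth confirming that the mirror serves the file and that its checksum matches the value reported by openshift-install coreos print-stream-json. The snippet below is a sketch only: the mirror URL is a placeholder and the jq path assumes the stream layout shown in the example output above.

# Expected checksum for the Nutanix qcow2, taken from the stream metadata.
EXPECTED_SHA256=$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.nutanix.formats.qcow2.disk.sha256')
# Download the image from your mirror (placeholder URL) and compare checksums.
curl -fLo rhcos-nutanix.qcow2 http://mirror.example.com/images/rhcos-nutanix.x86_64.qcow2
ACTUAL_SHA256=$(sha256sum rhcos-nutanix.qcow2 | awk '{print $1}')
if [ "$EXPECTED_SHA256" = "$ACTUAL_SHA256" ]; then echo "mirrored image checksum OK"; else echo "checksum mismatch"; fi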
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_nutanix/index
Chapter 1. Backing up your Red Hat OpenStack Platform cluster by using the Snapshot and Revert tool
Chapter 1. Backing up your Red Hat OpenStack Platform cluster by using the Snapshot and Revert tool Snapshots preserve the original disk state of your Red Hat OpenStack Platform (RHOSP) cluster before you perform an upgrade or an update from RHOSP 17.1 or later. You can then remove or revert the snapshots depending on the results. For example, if an upgrade completed successfully and you do not need the snapshots anymore, remove them from your nodes. If an upgrade fails, you can revert the snapshots, assess any errors, and start the upgrade procedure again. A revert leaves the disks of all the nodes exactly as they were when the snapshot was taken. The RHOSP Snapshot and Revert tool is based on the Logical Volume Manager (LVM) snapshot functionality and is only intended to revert an unsuccessful upgrade or update. Important The snapshots are stored on the same hard drives as the data you have stored on your disks. As a result, the Snapshot and Revert tool does not prevent data loss in cases of hardware failure, data center failure, or inaccessible nodes. You can take snapshots of Controller nodes and Compute nodes. Taking snapshots of the undercloud is not supported. 1.1. Creating a snapshot of Controller and Compute nodes Create a snapshot of your Controller and Compute nodes before performing an upgrade or update. You can then remove or revert the snapshots depending on the results of those actions. Note You can create only one snapshot of your Controller and Compute nodes. To create another snapshot, you must remove or revert your snapshot. Prerequisites You have LVM enabled on the node. The following default set of LVM logical volumes defined by a RHOSP installation are present: /dev/vg/lv_audit /dev/vg/lv_home /dev/vg/lv_log /dev/vg/lv_root /dev/vg/lv_srv /dev/vg/lv_var You can run the lvs , lvscan , or lvdisplay commands to confirm whether your environment includes these prerequisites before you make changes to the node disks. Note These prerequisites are included with the default installation of a 17.1 cluster. However, if you upgraded to RHOSP 17.1 from an earlier RHOSP version, your control plane does not include these prerequisites because they require reformatting of the disk. Procedure Log in to the undercloud as the stack user. Source the stackrc undercloud credentials file: If you have not done so before, extract the static Ansible inventory file from the location in which it was saved during installation: Replace <stack> with the name of your stack. By default, the name of the stack is overcloud . Take the snapshots: If your upgrade or update was successful, remove the snapshots: Important Removing snapshots is a critical action. Remove the snapshots if you do not intend to revert the nodes, for example, after an upgrade completes successfully. If you retain snapshots on the nodes for too long, they degrade disk I/O performance. If your upgrade or update failed, revert the snapshots: Reboot each node that you reverted so the changes are applied to the filesystem. The revert option automatically deletes the snapshots.
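As a rough illustration of how the three snapshot operations fit around a maintenance window, a wrapper along the following lines could be used. It is a sketch, not a supported procedure: run_update_or_upgrade is a hypothetical placeholder for your own update or upgrade steps, and the inventory path assumes the copy made in the procedure above.

#!/bin/bash
set -euo pipefail
source ~/stackrc
INVENTORY=~/tripleo-inventory.yaml
# Preserve the current disk state of the Controller and Compute nodes.
openstack overcloud backup snapshot --inventory "$INVENTORY"
if run_update_or_upgrade; then   # hypothetical placeholder for your update or upgrade steps
  # Success: remove the snapshots so they do not degrade disk I/O over time.
  openstack overcloud backup snapshot --remove --inventory "$INVENTORY"
else
  # Failure: revert the snapshots, then reboot each reverted node.
  openstack overcloud backup snapshot --revert --inventory "$INVENTORY"
fi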
[ "[stack@undercloud ~]USD source stackrc (undercloud) [stack@undercloud ~]USD", "(undercloud) [stack@undercloud ~]USD cp ~/overcloud-deploy/<stack> /tripleo-ansible-inventory.yaml ~/tripleo-inventory.yaml", "(undercloud) [stack@undercloud ~]USD openstack overcloud backup snapshot --inventory ~/tripleo-inventory.yaml", "(undercloud) [stack@undercloud ~]USD openstack overcloud backup snapshot --remove --inventory ~/tripleo-inventory.yaml", "(undercloud) [stack@undercloud ~]USD openstack overcloud backup snapshot --revert --inventory ~/tripleo-inventory.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/assembly_snapshot-and-revert-appendix_snapshot-and-revert-appendix
Chapter 1. Support overview
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters . Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs.
Crio issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Cleaning CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: Viewing the status of the Red Hat OpenShift Logging Operator Viewing the status of logging components Troubleshooting logging alerts Collecting information about your logging environment by using the oc adm must-gather command OpenShift CLI ( oc ) issues : Investigate OpenShift CLI ( oc ) issues by increasing the log level.
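As a hedged example of what gathering data for a support case often looks like in practice, the commands below combine several of the tools mentioned above. The --dest-dir flag and the jsonpath expression are standard oc usage rather than text from this overview.

# Collect the default must-gather data into a local directory.
oc adm must-gather --dest-dir=./must-gather-data
# Record the cluster ID so it can be quoted in the support case.
oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'
# Compress the collected data for attachment to the case.
tar -czf must-gather-data.tar.gz must-gather-data/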
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/support/support-overview
Chapter 6. Upgrade Quay Bridge Operator
Chapter 6. Upgrade Quay Bridge Operator To upgrade the Quay Bridge Operator (QBO), change the update channel in the Subscription tab to the desired channel. When upgrading QBO from version 3.5 to 3.7, a number of extra steps are required: You need to create a new QuayIntegration custom resource. This can be completed in the Web Console or from the command line. upgrade-quay-integration.yaml - apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com 1 Make sure that the clusterID matches the value for the existing QuayIntegration resource. Create the new QuayIntegration custom resource: USD oc create -f upgrade-quay-integration.yaml Delete the old QuayIntegration custom resource. Delete the old mutatingwebhookconfigurations : USD oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator
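Before deleting the old QuayIntegration custom resource, it can be worth confirming that the new one exists and carries the expected clusterID. The commands below are a sketch; example-quayintegration is a hypothetical name for the old resource, because its original name is not shown here.

# Confirm the new custom resource was created and inspect its clusterID.
oc get quayintegration example-quayintegration-new -o yaml
# Only after verifying the new resource, remove the old one (hypothetical name).
oc delete quayintegration example-quayintegration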
[ "- apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com", "oc create -f upgrade-quay-integration.yaml", "oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/upgrade_red_hat_quay/qbo-operator-upgrade
Chapter 5. OAuthClient [oauth.openshift.io/v1]
Chapter 5. OAuthClient [oauth.openshift.io/v1] Description OAuthClient describes an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description accessTokenInactivityTimeoutSeconds integer AccessTokenInactivityTimeoutSeconds overrides the default token inactivity timeout for tokens granted to this client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. This value needs to be set only if the default set in configuration is not appropriate for this client. Valid values are: - 0: Tokens for this client never time out - X: Tokens time out if there is no activity for X seconds The current minimum allowed value for X is 300 (5 minutes) WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenMaxAgeSeconds integer AccessTokenMaxAgeSeconds overrides the default access token max age for tokens granted to this client. 0 means no expiration. additionalSecrets array (string) AdditionalSecrets holds other secrets that may be used to identify the client. This is useful for rotation and for service account token validation apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources grantMethod string GrantMethod is a required field which determines how to handle grants for this client. Valid grant handling methods are: - auto: always approves grant requests, useful for trusted clients - prompt: prompts the end user for approval of grant requests, useful for third-party clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURIs array (string) RedirectURIs is the valid redirection URIs associated with a client respondWithChallenges boolean RespondWithChallenges indicates whether the client wants authentication needed responses made in the form of challenges instead of redirects scopeRestrictions array ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. scopeRestrictions[] object ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. secret string Secret is the unique secret associated with a client 5.1.1. .scopeRestrictions Description ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. Type array 5.1.2. 
.scopeRestrictions[] Description ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. Type object Property Type Description clusterRole object ClusterRoleScopeRestriction describes restrictions on cluster role scopes literals array (string) ExactValues means the scope has to match a particular set of strings exactly 5.1.3. .scopeRestrictions[].clusterRole Description ClusterRoleScopeRestriction describes restrictions on cluster role scopes Type object Required roleNames namespaces allowEscalation Property Type Description allowEscalation boolean AllowEscalation indicates whether you can request roles and their escalating resources namespaces array (string) Namespaces is the list of namespaces that can be referenced. * means any of them (including *) roleNames array (string) RoleNames is the list of cluster roles that can referenced. * means anything 5.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclients DELETE : delete collection of OAuthClient GET : list or watch objects of kind OAuthClient POST : create an OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients GET : watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclients/{name} DELETE : delete an OAuthClient GET : read the specified OAuthClient PATCH : partially update the specified OAuthClient PUT : replace the specified OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients/{name} GET : watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/oauth.openshift.io/v1/oauthclients HTTP method DELETE Description delete collection of OAuthClient Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClient Table 5.3. HTTP responses HTTP code Reponse body 200 - OK OAuthClientList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClient Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body OAuthClient schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 202 - Accepted OAuthClient schema 401 - Unauthorized Empty 5.2.2. /apis/oauth.openshift.io/v1/watch/oauthclients HTTP method GET Description watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/oauth.openshift.io/v1/oauthclients/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method DELETE Description delete an OAuthClient Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClient Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClient Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClient Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body OAuthClient schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty 5.2.4. /apis/oauth.openshift.io/v1/watch/oauthclients/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method GET Description watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
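For orientation, a minimal OAuthClient object built from the fields documented above might look like the following sketch; the client name, secret, and redirect URI are placeholders, and prompt is one of the two documented grantMethod values.

cat << EOF | oc create -f -
apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: demo-oauth-client
secret: "<random_client_secret>"
redirectURIs:
- "https://app.example.com/oauth/callback"
grantMethod: prompt
accessTokenMaxAgeSeconds: 86400
EOF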
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/oauth_apis/oauthclient-oauth-openshift-io-v1
Assisted Installer for OpenShift Container Platform
Assisted Installer for OpenShift Container Platform 2023 Assisted Installer User Guide Red Hat Customer Content Services
[ "mkdir -p <working_directory>/auth", "cp kubeadmin <working_directory>/auth", "export KUBECONFIG=<your working directory>/auth/kubeconfig", "oc login -u kubeadmin -p <password>", "export OFFLINE_TOKEN=<copied_api_token>", "ocm login --token=\"USD{OFFLINE_TOKEN}\"", "export API_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "ocm login --token=\"USD{OFFLINE_TOKEN}\"", "export API_TOKEN=USD(ocm token)", "vim ~/.local/bin/refresh-token", "export API_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "chmod +x ~/.local/bin/refresh-token", "source refresh-token", "curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{API_TOKEN}\" | jq", "{ \"release_tag\": \"v2.11.3\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:78d113a\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195\" } }", "{\"auths\":{\"cloud.openshift.com\":", "{\\\"auths\\\":{\\\"cloud.openshift.com\\\":", "export PULL_SECRET=USD(cat ~/Downloads/pull-secret.txt | jq -R .)", "curl https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1 { \"name\": \"testcluster\", \"high_availability_mode\": \"None\", \"openshift_version\": \"4.11\", \"pull_secret\": USDpull_secret[0] | tojson, 2 \"base_dns_domain\": \"example.com\" } ')\"", "source refresh-token", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"<architecture_name>\" 1 \"high_availability_mode\": <cluster_type>, 2 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "cat << EOF > cluster.json { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"high_availability_mode\": \"<cluster_type>\", \"base_dns_domain\": \"example.com\", \"pull_secret\": USDPULL_SECRET } EOF", "curl -s -X POST \"https://api.openshift.com/api/assisted-install/v2/clusters\" -d @./cluster.json -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.id'", "export CLUSTER_ID=<cluster_id>", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer 
USDAPI_TOKEN\" | jq", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"ssh_public_key\": \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname\" } ' | jq", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"<architecture_name>\" 1 \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "cat << EOF > infra-envs.json { \"name\": \"testcluster-infra-env\", \"image_type\": \"full-iso\", \"cluster_id\": \"USDCLUSTER_ID\", \"pull_secret\": USDPULL_SECRET } EOF", "curl -s -X POST \"https://api.openshift.com/api/assisted-install/v2/infra-envs\" -d @./infra-envs.json -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.id'", "export INFRA_ENV_ID=<id>", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"image_type\":\"minimal-iso\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq", "source refresh-token", "curl -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url", "wget -O discovery.iso '<url>'", "source refresh-token", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '.[]|[.id,.requested_hostname] | join(\"|\")'", "curl https://api.stage.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/USD1/installer-args -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"args\": [ \"--append-karg\", \"rd.neednet=1\", \"--append-karg\", \"ip=10.14.6.3::10.14.6.1:255.255.255.0:master-0.boea3e06.lnxero1.boe:encbdd0:none\", \"--append-karg\", \"nameserver=10.14.6.1\", \"--append-karg\", \"ip=[fd00::3]::[fd00::1]:64::encbdd0:none\", \"--append-karg\", \"nameserver=[fd00::1]\", \"--append-karg\", \"zfcp.allow_lun_scan=0\", \"--append-karg\", \"rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1\", \"--append-karg\", \"rd.dasd=0.0.5235\" ] } ' | jq", "[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]", "curl 
https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" \"host_name\" : \"worker-1\" } ' | jq", "source refresh-token", "curl -H \"Authorization: Bearer USDAPI_TOKEN\" -X POST https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/actions/install | jq", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq", "tang-show-keys <port>", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "sudo dnf install jose", "sudo jose jwk thp -i /var/db/tang/<public_key>.jwk", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"cnv\"}]\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\" \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"mce\"}]\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"odf\"}]\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.14\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"lvm\"}]\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "source refresh-token", "curl -s https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '[ .[] | { \"name\": .name, \"id\": .id } ]'", "[ { \"name\": \"lvmtest\", \"id\": 
\"475358f9-ed3a-442f-ab9e-48fd68bc8188\" 1 }, { \"name\": \"mcetest\", \"id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\" } ]", "export CLUSTER_ID=<cluster_id>", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"olm_operators\": [{\"name\": \"mce\"}, {\"name\": \"cnv\"}], 1 } ' | jq '.id'", "{ <various cluster properties>, \"monitored_operators\": [ { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"console\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cvo\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"mce\", \"namespace\": \"multicluster-engine\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"multicluster-engine\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cnv\", \"namespace\": \"openshift-cnv\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"hco-operatorhub\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"lvm\", \"namespace\": \"openshift-local-storage\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"local-storage-operator\", \"timeout_seconds\": 4200 } ], <more cluster properties>", "vim ~/ignition.conf", "{ \"ignition\": { \"version\": \"3.1.0\" } }", "openssl passwd -6", "{ \"ignition\": { \"version\": \"3.1.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"passwordHash\": \"USD6USDspamUSDM5LGSMGyVD.9XOboxcwrsnwNdF4irpJdAWy.1Ry55syyUiUssIzIAHaOrUHr2zg6ruD8YNBPW9kW0H8EnKXyc1\" } ] } }", "export IGNITION_FILE=~/ignition.conf", "jq -n --arg IGNITION \"USD(jq -c . 
USDIGNITION_FILE)\" '{ignition_config_override: USDIGNITION}' > discovery_ignition.json", "source refresh-token", "curl --header \"Authorization: Bearer USDAPI_TOKEN\" --header \"Content-Type: application/json\" -XPATCH -d @discovery_ignition.json https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID | jq", "dd if=<path_to_iso> of=<path_to_usb> status=progress", "source refresh-token", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]", "curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia", "curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script", "#!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot", "awk '/^initrd /{print USDNF}' ipxe-script | curl -o initrd.img", "awk '/^kernel /{print USD2}' ipxe-script | curl -o kernel", "grep ^kernel ipxe-script | xargs -n1| grep ^coreos.live.rootfs_url | cut -d = -f 2- | curl -o rootfs.img", "#!ipxe set webserver http://192.168.0.1 initrd --name initrd USDwebserver/initrd.img kernel USDwebserver/kernel initrd=initrd coreos.live.rootfs_url=USDwebserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot", "random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8", "if [ USD{net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry \"CoreOS (BIOS)\" { echo \"Loading 
kernel\" linux \"/rhcos/kernel.img\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo \"Loading initrd\" initrd \"/rhcos/initrd.img\" } fi", "source refresh-token", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq", "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[]) | map(select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'", "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq '. | map(.[] | select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'", "--- clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 ---", "--- { \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] } ---", "--- { \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] } ---", "--- dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 ---", "--- mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] ---", "--- interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 ---", "--- interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: all_slaves_active: delivered miimon: \"140\" slaves: - eth0 - eth1 name: bond0 state: up type: bond ---", "--- jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 
\"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt ---", "source refresh-token", "--- curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID ---", "--- { \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] } ---", "--- { \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] } ---", "oc adm release info -o json | jq .metadata.metadata", "{ \"release.openshift.io/architecture\": \"multi\" }", "export API_URL=<api_url> 1", "export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')", "export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDCLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDCLUSTER_ID, \"name\": \"<openshift_cluster_name>\" 2 }')", "CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')", "export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')", "INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')", "curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.download_url'", 
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12", "curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'", "2294ba03-c264-4f11-ac08-2f1bb2f8c296", "HOST_ID=<host_id> 1", "curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{API_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r", "{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }", "curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{API_TOKEN}\"", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'", "{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }", "curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'", "{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"<version-number>-multi\", 1 \"cpu_architecture\" : \"multi\" 2 \"high_availability_mode\": \"full\" 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"x86_64\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"master\" } ' | jq", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.12\", \"cpu_architecture\" : \"arm64\" \"high_availability_mode\": \"full\" \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq", "oc get nodes -o wide", "oc get csr | grep Pending", "csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api", "oc create 
-f <filename>", "oc apply -f <filename>", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed", "oc create -f <filename>", "oc apply -f <filename>", "#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. 
| .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"", "bash link-machine-and-node.sh custom-master3 worker-5", "oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+", "oc get clusteroperator etcd", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m", "oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health", "192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms", "oc get Nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f", "oc get ClusterOperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m 
openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m", "oc get ClusterVersion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5", "oc delete bmh -n openshift-machine-api custom-master3", "oc get machine -A", "NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h", "oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io \"test-day2-1-6qv96-master-0\" deleted", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f", "oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf", "E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource", "oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f", "oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf", "E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+", "etcdctl endpoint health", 
"{\"level\":\"warn\",\"ts\":\"2022-09-27T08:25:35.953Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000680380/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster", "oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2", "oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f", "I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}]", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+", "etcdctl endpoint health", "{\"level\":\"warn\",\"ts\":\"2022-09-27T10:31:07.227Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0000d6e00/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster", "etcdctl member remove 61e2a86084aafa62", "Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7", "etcdctl member list -w table", "+----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+", "oc get csr | grep Pending", "csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s 
v1.24.0+3882f8f", "oc create bmh -n openshift-machine-api custom-master3", "oc create machine -n openshift-machine-api custom-master3", "#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . 
curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"", "bash link-machine-and-node.sh custom-master3 worker-3", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table", "+---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+", "oc get clusteroperator etcd", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health", "192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms", "oc get Nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f", "oc get ClusterOperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h", "oc get ClusterVersion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5", "touch ~/nutanix-cluster-env.sh", "chmod +x ~/nutanix-cluster-env.sh", "source ~/nutanix-cluster-env.sh", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_NAME=<cluster_name> EOF", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_NAME=<subnet_name> EOF", "source refresh-token", "curl -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url", "cat << EOF > create-image.json { 
\"spec\": { \"name\": \"ocp_ai_discovery_image.iso\", \"description\": \"ocp_ai_discovery_image.iso\", \"resources\": { \"architecture\": \"X86_64\", \"image_type\": \"ISO_IMAGE\", \"source_uri\": \"<image_url>\", \"source_options\": { \"allow_insecure_connection\": true } } }, \"metadata\": { \"spec_version\": 3, \"kind\": \"image\" } } EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/images -H 'accept: application/json' -H 'Content-Type: application/json' -d @./create-image.json | jq '.metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_IMAGE_UUID=<uuid> EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{ \"kind\": \"cluster\" }' | jq '.entities[] | select(.spec.name==\"<nutanix_cluster_name>\") | .metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_UUID=<uuid> EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{ \"kind\": \"subnet\", \"filter\": \"name==<subnet_name>\" }' | jq '.entities[].metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_UUID=<uuid> EOF", "source ~/nutanix-cluster-env.sh", "touch create-master-0.json", "cat << EOF > create-master-0.json { \"spec\": { \"name\": \"<host_name>\", \"resources\": { \"power_state\": \"ON\", \"num_vcpus_per_socket\": 1, \"num_sockets\": 16, \"memory_size_mib\": 32768, \"disk_list\": [ { \"disk_size_mib\": 122880, \"device_properties\": { \"device_type\": \"DISK\" } }, { \"device_properties\": { \"device_type\": \"CDROM\" }, \"data_source_reference\": { \"kind\": \"image\", \"uuid\": \"USDNTX_IMAGE_UUID\" } } ], \"nic_list\": [ { \"nic_type\": \"NORMAL_NIC\", \"is_connected\": true, \"ip_endpoint_list\": [ { \"ip_type\": \"DHCP\" } ], \"subnet_reference\": { \"kind\": \"subnet\", \"name\": \"USDNTX_SUBNET_NAME\", \"uuid\": \"USDNTX_SUBNET_UUID\" } } ], \"guest_tools\": { \"nutanix_guest_tools\": { \"state\": \"ENABLED\", \"iso_mount_state\": \"MOUNTED\" } } }, \"cluster_reference\": { \"kind\": \"cluster\", \"name\": \"USDNTX_CLUSTER_NAME\", \"uuid\": \"USDNTX_CLUSTER_UUID\" } }, \"api_version\": \"3.1.0\", \"metadata\": { \"kind\": \"vm\" } } EOF", "curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' -H 'accept: application/json' -H 'Content-Type: application/json' -d @./<vm_config_file_name> | jq '.metadata.uuid'", "cat << EOF >> ~/nutanix-cluster-env.sh export NTX_MASTER_0_UUID=<uuid> EOF", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"platform_type\":\"nutanix\" } ' | jq", "oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF { \"spec\": { \"platformSpec\": { \"nutanix\": { \"prismCentral\": { \"address\": \"<prismcentral_address>\", \"port\": <prismcentral_port> }, \"prismElements\": [ { \"endpoint\": { \"address\": \"<prismelement_address>\", \"port\": <prismelement_port> }, \"name\": \"<prismelement_clustername>\" } ] }, \"type\": \"Nutanix\" } } } EOF", 
"infrastructure.config.openshift.io/cluster patched", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: nutanix-credentials namespace: openshift-machine-api type: Opaque stringData: credentials: | [{\"type\":\"basic_auth\",\"data\":{\"prismCentral\":{\"username\":\"USD{<prismcentral_username>}\",\"password\":\"USD{<prismcentral_password>}\"},\"prismElements\":null}}] EOF", "secret/nutanix-credentials created", "oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml", "cp cloud-provider-config_backup.yaml cloud-provider-config.yaml", "vi cloud-provider-config.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: cloud-provider-config namespace: openshift-config data: config: | { \"prismCentral\": { \"address\": \"<prismcentral_address>\", \"port\":<prismcentral_port>, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }", "oc apply -f cloud-provider-config.yaml", "Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically. configmap/cloud-provider-config configured", "vi openshift-cluster-csi-drivers-operator-group.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-cluster-csi-drivers namespace: openshift-cluster-csi-drivers spec: targetNamespaces: - openshift-cluster-csi-drivers upgradeStrategy: Default", "oc create -f openshift-cluster-csi-drivers-operator-group.yaml", "operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created", "oc get packagemanifests | grep nutanix", "nutanixcsioperator Certified Operators 129m", "DEFAULT_CHANNEL=USD(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel})", "STARTING_CSV=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.channels[*].currentCSV\\})", "CATALOG_SOURCE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.catalogSource\\})", "SOURCE_NAMESPACE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.catalogSourceNamespace\\})", "cat << EOF > nutanixcsioperator.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nutanixcsioperator namespace: openshift-cluster-csi-drivers spec: channel: USDDEFAULT_CHANNEL installPlanApproval: Automatic name: nutanixcsioperator source: USDCATALOG_SOURCE sourceNamespace: USDSOURCE_NAMESPACE startingCSV: USDSTARTING_CSV EOF", "oc apply -f nutanixcsioperator.yaml", "subscription.operators.coreos.com/nutanixcsioperator created", "oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}'", "cat <<EOF | oc create -f - apiVersion: crd.nutanix.com/v1alpha1 kind: NutanixCsiStorage metadata: name: nutanixcsistorage namespace: openshift-cluster-csi-drivers spec: {} EOF", "snutanixcsistorage.crd.nutanix.com/nutanixcsistorage created", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: ntnx-secret namespace: openshift-cluster-csi-drivers stringData: # prism-element-ip:prism-port:admin:password key: 
<prismelement_address:prismelement_port:prismcentral_username:prismcentral_password> 1 EOF", "secret/nutanix-secret created", "cat <<EOF | oc create -f - kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: nutanix-volume annotations: storageclass.kubernetes.io/is-default-class: 'true' provisioner: csi.nutanix.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers csi.storage.k8s.io/provisioner-secret-name: ntnx-secret storageContainer: <nutanix_storage_container> 1 csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers storageType: NutanixVolumes csi.storage.k8s.io/node-publish-secret-name: ntnx-secret csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate EOF", "storageclass.storage.k8s.io/nutanix-volume created", "cat <<EOF | oc create -f - kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nutanix-volume-pvc namespace: openshift-cluster-csi-drivers annotations: volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com volume.kubernetes.io/storage-provisioner: csi.nutanix.com finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: nutanix-volume volumeMode: Filesystem EOF", "persistentvolumeclaim/nutanix-volume-pvc created", "oc get pvc -n openshift-cluster-csi-drivers", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nutanix-volume-pvc Bound nutanix-volume 52s", "wget - O vsphere-discovery-image.iso <discovery_url>", "for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done", "govc datastore.rm -ds <iso_datastore> <image>", "govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=16 -m=32768 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=4 -m=8192 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc ls /<datacenter>/vm/<folder_name>", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.enableUUID=TRUE done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done", "echo -n \"<vcenter_username>\" | base64 -w0", "echo -n \"<vcenter_password>\" | base64 -w0", "oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml", "cp creds_backup.yaml vsphere-creds.yaml", "vi vsphere-creds.yaml", "apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: \"2022-01-25T17:39:50Z\" name: vsphere-creds 
namespace: kube-system resourceVersion: \"2437\" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque", "oc replace -f vsphere-creds.yaml", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml", "cloud-provider-config_backup.yaml cloud-provider-config.yaml", "vi cloud-provider-config.yaml", "apiVersion: v1 data: config: | [Global] secret-name = \"vsphere-creds\" secret-namespace = \"kube-system\" insecure-flag = \"1\" [Workspace] server = \"<vcenter_address>\" datacenter = \"<datacenter>\" default-datastore = \"<datastore>\" folder = \"/<datacenter>/vm/<folder>\" [VirtualCenter \"<vcenter_address>\"] datacenters = \"<datacenter>\" kind: ConfigMap metadata: creationTimestamp: \"2022-01-25T17:40:49Z\" name: cloud-provider-config namespace: openshift-config resourceVersion: \"2070\" uid: 80bb8618-bf25-442b-b023-b31311918507", "oc apply -f cloud-provider-config.yaml", "oc get nodes", "oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup", "cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml", "vi infrastructures.config.openshift.io.yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: \"2022-05-07T10:19:55Z\" generation: 1 name: cluster resourceVersion: \"536\" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: \"/<data_center>/path/to/folder\" networks: - \"VM Network\" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: \"\"", "oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: 
kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem", "ssh core@<host_ip_address>", "ssh -i <ssh_private_key_file> core@<host_ip_address>", "sudo journalctl -u agent.service", "sudo journalctl TAG=agent" ]
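The govc vm.create invocations above provision one VM per command. When several identical workers are being added, the same flags can be driven from a small loop. The sketch below is illustrative only — the hostnames are hypothetical placeholders, per-host values such as -net.address would still be supplied individually, and the remaining flags simply mirror the worker example above:

# Create several worker VMs that boot from the uploaded discovery ISO.
# Hostnames are placeholders; substitute your own angle-bracket values as above.
for HOST in worker-2 worker-3 worker-4; do
  govc vm.create \
    -net.adapter <network_adapter_type> \
    -disk.controller <disk_controller_type> \
    -pool=<resource_pool> \
    -c=4 -m=8192 -disk=120GB \
    -disk-datastore=<datastore_file> \
    -iso-datastore=<iso_datastore> \
    -iso="vsphere-discovery-image.iso" \
    -folder="<inventory_folder>" \
    "${HOST}.<cluster_name>.example.com"
done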
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/assisted_installer_for_openshift_container_platform/index
Chapter 9. Setting up a trust
Chapter 9. Setting up a trust You can configure the Identity Management (IdM)/Active Directory (AD) trust on the IdM side using the command line. Prerequisites DNS is correctly configured. Both IdM and AD servers must be able to resolve each other names. For details, see Configuring DNS and realm settings for a trust . Supported versions of AD and IdM are deployed. For details, see Supported versions of Windows Server . You have obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . 9.1. Preparing the IdM server for the trust Before you can establish a trust with AD, you must prepare the IdM domain using the ipa-adtrust-install utility on an IdM server. Note Any system where you run the ipa-adtrust-install command automatically becomes an AD trust controller. However, you must run ipa-adtrust-install only once on an IdM server. Prerequisites IdM server is installed. You have root privileges to install packages and restart IdM services. Procedure Install the required packages: Authenticate as the IdM administrative user: Run the ipa-adtrust-install utility: The DNS service records are created automatically if IdM was installed with an integrated DNS server. If you installed IdM without an integrated DNS server, ipa-adtrust-install prints a list of service records that you must manually add to DNS before you can continue. The script prompts you that the /etc/samba/smb.conf already exists and will be rewritten: The script prompts you to configure the slapi-nis plug-in, a compatibility plug-in that allows older Linux clients to work with trusted users: You are prompted to run the SID generation task to create a SID for any existing users: This is a resource-intensive task, so if you have a high number of users, you can run this at another time. Optional: By default, the Dynamic RPC port range is defined as 49152-65535 for Windows Server 2008 and later. If you need to define a different Dynamic RPC port range for your environment, configure Samba to use different ports and open those ports in your firewall settings. The following example sets the port range to 55000-65000 . Make sure that DNS is properly configured, as described in Verifying the DNS configuration for a trust . Important Red Hat strongly recommends you verify the DNS configuration as described in Verifying the DNS configuration for a trust every time after running ipa-adtrust-install , especially if IdM or AD do not use integrated DNS servers. Restart the ipa service: Use the smbclient utility to verify that Samba responds to Kerberos authentication from the IdM side: 9.2. Setting up a trust agreement using the command line You can set up the trust agreement using the command line. The Identity Management (IdM) server allows you to configure three types of trust agreements: One-way trust - default option. One-way trust enables Active Directory (AD) users and groups to access resources in IdM, but not the other way around. The IdM domain trusts the AD forest, but the AD forest does not trust the IdM domain. Two-way trust - Two-way trust enables AD users and groups to access resources in IdM. You must configure a two-way trust for solutions such as Microsoft SQL Server that expect the S4U2Self and S4U2Proxy Microsoft extensions to the Kerberos protocol to work over a trust boundary. An application on a RHEL IdM host might request S4U2Self or S4U2Proxy information from an Active Directory domain controller about an AD user, and a two-way trust provides this feature. 
Note that this two-way trust functionality does not allow IdM users to login to Windows systems, and the two-way trust in IdM does not give the users any additional rights compared to the one-way trust solution in AD. To create the two-way trust, add the following option to the command: --two-way=true External trust - a trust relationship between IdM and an AD domain in different forests. While a forest trust always requires establishing a trust between IdM and the root domain of an Active Directory forest, an external trust can be established from IdM to a domain within a forest. This is only recommended if it is not possible to establish a forest trust between forest root domains due to administrative or organizational reasons. To create the external trust, add the following option to the command: --external=true The steps below show you how to create a one-way trust agreement. Prerequisites User name and password of a Windows administrator. You have prepared the IdM server for the trust . Procedure Create a trust agreement for the AD domain and the IdM domain by using the ipa trust-add command: To have SSSD automatically generate UIDs and GIDs for AD users based on their SID, create a trust agreement with the Active Directory domain ID range type. This is the most common configuration. If you have configured POSIX attributes for your users in Active Directory (such as uidNumber and gidNumber ) and you want SSSD to process this information, create a trust agreement with the Active Directory domain with POSIX attributes ID range type: Warning If you do not specify an ID Range type when creating a trust, IdM attempts to automatically select the appropriate range type by requesting details from AD domain controllers in the forest root domain. If IdM does not detect any POSIX attributes, the trust installation script selects the Active Directory domain ID range. If IdM detects any POSIX attributes in the forest root domain, the trust installation script selects the Active Directory domain with POSIX attributes ID range and assumes that UIDs and GIDs are correctly defined in AD. If POSIX attributes are not correctly set in AD, you will not be able to resolve AD users. For example, if the users and groups that need access to IdM systems are not part of the forest root domain, but instead are located in a child domain of the forest domain, the installation script might not detect the POSIX attributes defined in the child AD domain. In this case, explicitly choose the POSIX ID range type when establishing the trust. 9.3. Setting up a trust agreement in the IdM Web UI You can configure the Identity Management (IdM)/Active Directory (AD) trust agreement on the IdM side using the IdM Web UI. Prerequisites DNS is correctly configured. Both IdM and AD servers must be able to resolve each other names. Supported versions of AD and IdM are deployed. You have obtained a Kerberos ticket. Before creating a trust in the Web UI, prepare the IdM server for the trust as described in: Preparing the IdM server for the trust . You are logged in as an IdM administrator. For details, see Accessing the IdM Web UI in a web browser . Procedure In the IdM Web UI, click the IPA Server tab. In the IPA Server tab, click the Trusts tab. In the drop down menu, select the Trusts option. Click the Add button. In the Add Trust dialog box, enter the name of the Active Directory domain. In the Account and Password fields, add the administrator credentials of the Active Directory administrator. 
Optional: Select Two-way trust , if you want to enable AD users and groups to access resources in IdM. However, the two-way trust in IdM does not give the users any additional rights compared to the one-way trust solution in AD. Both solutions are considered equally secure because of default cross-forest trust SID filtering settings. Optional: Select External trust if you are configuring a trust with an AD domain that is not the root domain of an AD forest. While a forest trust always requires establishing a trust between IdM and the root domain of an Active Directory forest, you can establish an external trust from IdM to any domain within an AD forest. Optional: By default, the trust installation script tries to detect the appropriate ID range type. You can also explicitly set the ID range type by choosing one of the following options: To have SSSD automatically generate UIDs and GIDs for AD users based on their SID, select the Active Directory domain ID range type. This is the most common configuration. If you have configured POSIX attributes for your users in Active Directory (such as uidNumber and gidNumber ) and you want SSSD to process this information, select the Active Directory domain with POSIX attributes ID range type. Warning If you leave the Range type setting on the default Detect option, IdM attempts to automatically select the appropriate range type by requesting details from AD domain controllers in the forest root domain. If IdM does not detect any POSIX attributes, the trust installation script selects the Active Directory domain ID range. If IdM detects any POSIX attributes in the forest root domain, the trust installation script selects the Active Directory domain with POSIX attributes ID range and assumes that UIDs and GIDs are correctly defined in AD. If POSIX attributes are not correctly set in AD, you will not be able to resolve AD users. For example, if the users and groups that need access to IdM systems are not part of the forest root domain, but instead are located in a child domain of the forest domain, the installation script might not detect the POSIX attributes defined in the child AD domain. In this case, explicitly choose the POSIX ID range type when establishing the trust. Click Add . Verification If the trust has been successfully added to the IdM server, you can see the green pop-up window in the IdM Web UI. It means that the: Domain name exists User name and password of the Windows Server has been added correctly. Now you can continue to test the trust connection and Kerberos authentication. 9.4. Setting up a trust agreement using Ansible You can set up a one-way trust agreement between Identity Management (IdM) and Active Directory (AD) by using an Ansible playbook. You can configure three types of trust agreements: One-way trust - default option. One-way trust enables Active Directory (AD) users and groups to access resources in IdM, but not the other way around. The IdM domain trusts the AD forest, but the AD forest does not trust the IdM domain. Two-way trust - Two-way trust enables AD users and groups to access resources in IdM. You must configure a two-way trust for solutions such as Microsoft SQL Server that expect the S4U2Self and S4U2Proxy Microsoft extensions to the Kerberos protocol to work over a trust boundary. An application on a RHEL IdM host might request S4U2Self or S4U2Proxy information from an Active Directory domain controller about an AD user, and a two-way trust provides this feature. 
Note that this two-way trust functionality does not allow IdM users to login to Windows systems, and the two-way trust in IdM does not give the users any additional rights compared to the one-way trust solution in AD. To create the two-way trust, add the following variable to the playbook task below: two_way: true External trust - a trust relationship between IdM and an AD domain in different forests. While a forest trust always requires establishing a trust between IdM and the root domain of an Active Directory forest, an external trust can be established from IdM to a domain within a forest. This is only recommended if it is not possible to establish a forest trust between forest root domains due to administrative or organizational reasons. To create the external trust, add the following variable to the playbook task below: external: true Prerequisites User name and password of a Windows administrator. The IdM admin password. You have prepared the IdM server for the trust . You are using the 4.8.7 version of IdM or later. To view the version of IdM you have installed on your server, run ipa --version . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Select one of the following scenarios based on your use case: To create an ID mapping trust agreement, in which SSSD automatically generates UIDs and GIDs for AD users and groups based on their SIDs, create an add-trust.yml playbook with the following content: In the example: realm defines the AD realm name string. admin defines the AD domain administrator string. password defines the AD domain administrator's password string. To create a POSIX trust agreement, in which SSSD processes POSIX attributes stored in AD, such as uidNumber and gidNumber , create an add-trust.yml playbook with the following content: To create a trust agreement in which IdM attempts to automatically select the appropriate range type, ipa-ad-trust or ipa-ad-trust-posix , by requesting details from AD domain controllers in the forest root domain, create an add-trust.yml playbook with the following content: Warning If you do not specify an ID range type when creating a trust, and if IdM does not detect any POSIX attributes in the AD forest root domain, the trust installation script selects the Active Directory domain ID range. If IdM detects any POSIX attributes in the forest root domain, the trust installation script selects the Active Directory domain with POSIX attributes ID range and assumes that UIDs and GIDs are correctly defined in AD. However, if POSIX attributes are not correctly set in AD, you will not be able to resolve AD users. For example, if the users and groups that need access to IdM systems are not part of the forest root domain, but instead are located in a child domain of the forest domain, the installation script might not detect the POSIX attributes defined in the child AD domain. In this case, explicitly choose the POSIX ID range type when establishing the trust. Save the file. 
Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources /usr/share/doc/ansible-freeipa/README-trust.md /usr/share/doc/ansible-freeipa/playbooks/trust 9.5. Verifying the Kerberos configuration To verify the Kerberos configuration, test if it is possible to obtain a ticket for an Identity Management (IdM) user and if the IdM user can request service tickets. Procedure Request a ticket for an Active Directory (AD) user: Request service tickets for a service within the IdM domain: If the AD service ticket is successfully granted, there is a cross-realm ticket-granting ticket (TGT) listed with all of the other requested tickets. The TGT is named krbtgt/[email protected]. The localauth plug-in maps Kerberos principals to local System Security Services Daemon (SSSD) user names. This allows AD users to use Kerberos authentication and access Linux services, which support GSSAPI authentication directly. 9.6. Verifying the trust configuration on IdM Before configuring trust, verify that the Identity Management (IdM) and Active Directory (AD) servers can resolve themselves and each other. Prerequisites You need to be logged in with administrator privileges. Procedure Run a DNS query for the MS DC Kerberos over UDP and LDAP over TCP service records. These commands list all IdM servers on which ipa-adtrust-install has been executed. The output is empty if ipa-adtrust-install has not been executed on any IdM server, which is typically before establishing the first trust relationship. Run a DNS query for the Kerberos and LDAP over TCP service records to verify that IdM is able to resolve service records for AD: 9.7. Verifying the trust configuration on AD After configuring the trust, verify that: The Identity Management (IdM)-hosted services are resolvable from the Active Directory (AD) server. AD services are resolvable from the AD server. Prerequisites You need to be logged in with administrator privileges. Procedure On the AD server, set the nslookup.exe utility to look up service records. Enter the domain name for the Kerberos over UDP and LDAP over TCP service records. Change the service type to TXT and run a DNS query for the TXT record with the IdM Kerberos realm name. Run a DNS query for the MS DC Kerberos over UDP and LDAP over TCP service records. Active Directory only expects to discover domain controllers that can respond to AD-specific protocol requests, such as other AD domain controllers and IdM trust controllers. Use the ipa-adtrust-install tool to promote an IdM server to a trust controller, and you can verify which servers are trust controllers with the ipa server-role-find --role 'AD trust controller' command. Verify that AD services are resolvable from the AD server. Enter the domain name for the Kerberos over UDP and LDAP over TCP service records. 9.8. Creating a trust agent A trust agent is an IdM server that can perform identity lookups against AD domain controllers. For example, if you are creating a replica of an IdM server that has a trust with Active Directory, you can set up the replica as a trust agent. A replica does not automatically have the AD trust agent role installed. Prerequisites IdM is installed with an Active Directory trust. The sssd-tools package is installed. 
Procedure On an existing trust controller, run the ipa-adtrust-install --add-agents command: The command starts an interactive configuration session and prompts you for the information required to set up the agent. Restart the IdM service on the trust agent. Remove all entries from the SSSD cache on the trust agent: Verify that the replica has the AD trust agent role installed:. Additional resources ipa-adtrust-install(1) man page on your system Trust controllers and trust agents 9.9. Enabling automatic private group mapping for a POSIX ID range on the CLI By default, SSSD does not map private groups for Active Directory (AD) users if you have established a POSIX trust that relies on POSIX data stored in AD. If any AD users do not have primary groups configured, IdM is not be able to resolve them. This procedure explains how to enable automatic private group mapping for an ID range by setting the hybrid option for the auto_private_groups SSSD parameter on the command line. As a result, IdM is able to resolve AD users that do not have primary groups configured in AD. Prerequisites You have successfully established a POSIX cross-forest trust between your IdM and AD environments. Procedure Display all ID ranges and make note of the AD ID range you want to modify. Adjust the automatic private group behavior for the AD ID range with the ipa idrange-mod command. Reset the SSSD cache to enable the new setting. Additional resources Options for automatically mapping private groups for AD users 9.10. Enabling automatic private group mapping for a POSIX ID range in the IdM WebUI By default, SSSD does not map private groups for Active Directory (AD) users if you have established a POSIX trust that relies on POSIX data stored in AD. If any AD users do not have primary groups configured, IdM is not be able to resolve them. This procedure explains how to enable automatic private group mapping for an ID range by setting the hybrid option for the auto_private_groups SSSD parameter in the Identity Management (IdM) WebUI. As a result, IdM is able to resolve AD users that do not have primary groups configured in AD. Prerequisites You have successfully established a POSIX cross-forest trust between your IdM and AD environments. Procedure Log into the IdM Web UI with your user name and password. Open the IPA Server ID Ranges tab. Select the ID range you want to modify, such as AD.EXAMPLE.COM_id_range . From the Auto private groups drop down menu, select the hybrid option. Click the Save button to save your changes. Additional resources Options for automatically mapping private groups for AD users
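The Ansible procedure in section 9.4 assumes an inventory file and a secret.yml vault under ~/MyPlaybooks/. As a minimal sketch of that preparation — the server name, vault password file, and editor workflow are placeholders rather than values mandated by this chapter — the control node can be set up as follows:

# Inventory naming the IdM server that the ansible-freeipa trust playbook targets
cat << EOF > ~/MyPlaybooks/inventory
[ipaserver]
server.idm.example.com
EOF

# Vault-encrypted file holding ipaadmin_password for the playbooks;
# ansible-vault opens an editor in which you add a line such as:
#   ipaadmin_password: <idm_admin_password>
ansible-vault create --vault-password-file=password_file ~/MyPlaybooks/secret.yml

The add-trust.yml playbook can then be run exactly as shown in the commands below, with --vault-password-file pointing at the same password file.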
[ "dnf install ipa-server-trust-ad samba-client", "kinit admin", "ipa-adtrust-install", "WARNING: The smb.conf already exists. Running ipa-adtrust-install will break your existing Samba configuration. Do you wish to continue? [no]: yes", "Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]: yes", "Do you want to run the ipa-sidgen task? [no]: yes", "net conf setparm global 'rpc server dynamic port range' 55000-65000 firewall-cmd --add-port=55000-65000/tcp firewall-cmd --runtime-to-permanent", "ipactl restart", "smbclient -L ipaserver.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba 4.15.2)", "ipa trust-add --type=ad ad.example.com --admin <ad_admin_username> --password --range-type=ipa-ad-trust", "ipa trust-add --type=ad ad.example.com --admin <ad_admin_username> --password --range-type=ipa-ad-trust-posix", "cd ~/ MyPlaybooks /", "--- - name: Playbook to create a trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is present ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com admin: Administrator password: secret_password state: present", "--- - name: Playbook to create a trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is present ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com admin: Administrator password: secret_password range_type: ipa-ad-trust-posix state: present", "--- - name: Playbook to create a trust hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: ensure the trust is present ipatrust: ipaadmin_password: \"{{ ipaadmin_password }}\" realm: ad.example.com admin: Administrator password: secret_password state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory add-trust.yml", "kinit [email protected]", "kvno -S host server.idm.example.com", "klist Ticket cache: KEYRING:persistent:0:krb_ccache_hRtox00 Default principal: [email protected] Valid starting Expires Service principal 03.05.2016 18:31:06 04.05.2016 04:31:01 host/[email protected] renew until 04.05.2016 18:31:00 03.05.2016 18:31:06 04.05.2016 04:31:01 krbtgt/[email protected] renew until 04.05.2016 18:31:00 03.05.2016 18:31:01 04.05.2016 04:31:01 krbtgt/[email protected] renew until 04.05.2016 18:31:00", "dig +short -t SRV _kerberos._udp.dc._msdcs.idm.example.com. 0 100 88 server.idm.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs.idm.example.com. 0 100 389 server.idm.example.com.", "dig +short -t SRV _kerberos._tcp.dc._msdcs.ad.example.com. 0 100 88 addc1.ad.example.com. dig +short -t SRV _ldap._tcp.dc._msdcs.ad.example.com. 0 100 389 addc1.ad.example.com.", "C:\\>nslookup.exe > set type=SRV", "> _kerberos._udp.idm.example.com. _kerberos._udp.idm.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = server.idm.example.com > _ldap._tcp.idm.example.com _ldap._tcp.idm.example.com SRV service location: priority = 0 weight = 100 port = 389 svr hostname = server.idm.example.com", "C:\\>nslookup.exe > set type=TXT > _kerberos.idm.example.com. _kerberos.idm.example.com. 
text = \"IDM.EXAMPLE.COM\"", "C:\\>nslookup.exe > set type=SRV > _kerberos._udp.dc._msdcs.idm.example.com. _kerberos._udp.dc._msdcs.idm.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = server.idm.example.com > _ldap._tcp.dc._msdcs.idm.example.com. _ldap._tcp.dc._msdcs.idm.example.com. SRV service location: priority = 0 weight = 100 port = 389 svr hostname = server.idm.example.com", "C:\\>nslookup.exe > set type=SRV", "> _kerberos._udp.dc._msdcs.ad.example.com. _kerberos._udp.dc._msdcs.ad.example.com. SRV service location: priority = 0 weight = 100 port = 88 svr hostname = addc1.ad.example.com > _ldap._tcp.dc._msdcs.ad.example.com. _ldap._tcp.dc._msdcs.ad.example.com. SRV service location: priority = 0 weight = 100 port = 389 svr hostname = addc1.ad.example.com", "ipa-adtrust-install --add-agents", "ipactl restart", "sssctl cache-remove", "ipa server-show new_replica.idm.example.com Enabled server roles: CA server, NTP server, AD trust agent", "ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: AD.EXAMPLE.COM_id_range First Posix ID of the range: 1337000000 Number of IDs in the range: 200000 Domain SID of the trusted domain: S-1-5-21-4123312420-990666102-3578675309 Range type: Active Directory trust range with POSIX attributes ---------------------------- Number of entries returned 2 ----------------------------", "ipa idrange-mod --auto-private-groups=hybrid AD.EXAMPLE.COM_id_range", "sss_cache -E" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/setting-up-a-trust_installing-trust-between-idm-and-ad
Chapter 100. Using Ansible to manage DNS records in IdM
Chapter 100. Using Ansible to manage DNS records in IdM This chapter describes how to manage DNS records in Identity Management (IdM) using an Ansible playbook. As an IdM administrator, you can add, modify, and delete DNS records in IdM. The chapter contains the following sections: Ensuring the presence of A and AAAA DNS records in IdM using Ansible Ensuring the presence of A and PTR DNS records in IdM using Ansible Ensuring the presence of multiple DNS records in IdM using Ansible Ensuring the presence of multiple CNAME records in IdM using Ansible Ensuring the presence of an SRV record in IdM using Ansible 100.1. DNS records in IdM Identity Management (IdM) supports many different DNS record types. The following four are used most frequently: A This is a basic map for a host name and an IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is an IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is an IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _service . _protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record (PTR) adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the IdM Web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. 100.2. Common ipa dnsrecord-* options You can use the following options when adding, modifying and deleting the most common DNS resource record types in Identity Management (IdM): A (IPv4) AAAA (IPv6) SRV PTR In Bash , you can define multiple entries by listing the values in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . Table 100.1. General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 100.2. 
"A" record options Option Description Examples --a-rec = ARECORD Passes a single A record or a list of A records. ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123 Can create a wildcard A record with a given IP address. ipa dnsrecord-add idm.example.com "*" --a-rec=192.168.122.123 [a] --a-ip-address = string Gives the IP address for the record. When creating a record, the option to specify the A record value is --a-rec . However, when modifying an A record, the --a-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --a-rec 192.168.122.123 --a-ip-address 192.168.122.124 [a] The example creates a wildcard A record with the IP address of 192.0.2.123. Table 100.3. "AAAA" record options Option Description Example --aaaa-rec = AAAARECORD Passes a single AAAA (IPv6) record or a list of AAAA records. ipa dnsrecord-add idm.example.com www --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address = string Gives the IPv6 address for the record. When creating a record, the option to specify the A record value is --aaaa-rec . However, when modifying an A record, the --aaaa-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address 2001:db8::1231:5676 Table 100.4. "PTR" record options Option Description Example --ptr-rec = PTRRECORD Passes a single PTR record or a list of PTR records. When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reversed, compared to the usage for adding other DNS records. Typically, the host IP address is the last octet of the IP address in a given network. The first example on the right adds a PTR record for server4.idm.example.com with IPv4 address 192.168.122.4. The second example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.example.com with the IP address 2001:DB8::1111 . ipa dnsrecord-add 122.168.192.in-addr.arpa 4 --ptr-rec server4.idm.example.com. USD ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.idm.example.com. --ptr-hostname = string Gives the host name for the record. Table 100.5. "SRV" Record Options Option Description Example --srv-rec = SRVRECORD Passes a single SRV record or a list of SRV records. In the examples on the right, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. The weight values of 51 and 49 in the examples add up to 100 and represent the probability, in percentages, that a particular record is used. # ipa dnsrecord-add idm.example.com _ldap._tcp --srv-rec="0 51 389 server1.idm.example.com." # ipa dnsrecord-add server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority=0 --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. 
The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 49 389 server2.idm.example.com." --srv-weight=60 --srv-port = number Gives the port for the service on the target host. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 60 389 server2.idm.example.com." --srv-port=636 --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. Additional resources Run ipa dnsrecord-add --help . 100.3. Ensuring the presence of A and AAAA DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that A and AAAA records for a particular IdM host are present. In the example used in the procedure below, an IdM administrator ensures the presence of A and AAAA records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-A-and-AAAA-records-are-present.yml Ansible playbook file. For example: Open the ensure-A-and-AAAA-records-are-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable, set the name variable to host1 , and the a_ip_address variable to 192.168.122.123 . In the records variable, set the name variable to host1 , and the aaaa_ip_address variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 100.4. Ensuring the presence of A and PTR DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that an A record for a particular IdM host is present, with a corresponding PTR record. In the example used in the procedure below, an IdM administrator ensures the presence of A and PTR records for host1 with an IP address of 192.168.122.45 in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com DNS zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-dnsrecord-with-reverse-is-present.yml Ansible playbook file. For example: Open the ensure-dnsrecord-with-reverse-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the ip_address variable to 192.168.122.45 . Set the create_reverse variable to true . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 100.5. Ensuring the presence of multiple DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that multiple values are associated with a particular IdM DNS record. In the example used in the procedure below, an IdM administrator ensures the presence of multiple A records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-presence-multiple-records.yml Ansible playbook file. For example: Open the ensure-presence-multiple-records-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. In the records section, set the name variable to host1 . 
In the records section, set the zone_name variable to idm.example.com . In the records section, set the a_rec variable to 192.168.122.112 and to 192.168.122.122 . Define a second record in the records section: Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the aaaa_rec variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 100.6. Ensuring the presence of multiple CNAME records in IdM using Ansible A Canonical Name record (CNAME record) is a type of resource record in the Domain Name System (DNS) that maps one domain name, an alias, to another name, the canonical name. You may find CNAME records useful when running multiple services from a single IP address: for example, an FTP service and a web service, each running on a different port. Follow this procedure to use an Ansible playbook to ensure that multiple CNAME records are present in IdM DNS. In the example used in the procedure below, host03 is both an HTTP server and an FTP server. The IdM administrator ensures the presence of the www and ftp CNAME records for the host03 A record in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . The host03 A record exists in the idm.example.com zone. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-CNAME-record-is-present.yml Ansible playbook file. For example: Open the ensure-CNAME-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Optional: Adapt the description provided by the name of the play. Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable section, set the following variables and values: Set the name variable to www . Set the cname_hostname variable to host03 . Set the name variable to ftp . Set the cname_hostname variable to host03 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 100.7. 
Ensuring the presence of an SRV record in IdM using Ansible A DNS service (SRV) record defines the hostname, port number, transport protocol, priority and weight of a service available in a domain. In Identity Management (IdM), you can use SRV records to locate IdM servers and replicas. Follow this procedure to use an Ansible playbook to ensure that an SRV record is present in IdM DNS. In the example used in the procedure below, an IdM administrator ensures the presence of the _kerberos._udp.idm.example.com SRV record with the value of 10 50 88 idm.example.com . This sets the following values: It sets the priority of the service to 10. It sets the weight of the service to 50. It sets the port to be used by the service to 88. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-SRV-record-is-present.yml Ansible playbook file. For example: Open the ensure-SRV-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to _kerberos._udp.idm.example.com . Set the srv_rec variable to '10 50 88 idm.example.com' . Set the zone_name variable to idm.example.com . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory
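After any of the playbooks in this chapter have run, it is worth confirming that the records really exist in IdM DNS. A minimal verification sketch, reusing the zone and host names from the examples above:

# Show every record IdM stores for host1 in the idm.example.com zone
ipa dnsrecord-show idm.example.com host1

# Resolve the names the way a DNS client would
dig +short host1.idm.example.com A
dig +short host1.idm.example.com AAAA
dig +short -t SRV _kerberos._udp.idm.example.com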
[ "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-A-and-AAAA-records-are-present.yml ensure-A-and-AAAA-records-are-present-copy.yml", "--- - name: Ensure A and AAAA records are present hosts: ipaserver become: true gather_facts: false tasks: # Ensure A and AAAA records are present - name: Ensure that 'host1' has A and AAAA records. ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: host1 a_ip_address: 192.168.122.123 - name: host1 aaaa_ip_address: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-A-and-AAAA-records-are-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-dnsrecord-with-reverse-is-present.yml ensure-dnsrecord-with-reverse-is-present-copy.yml", "--- - name: Ensure DNS Record is present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that dns record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host1 zone_name: idm.example.com ip_address: 192.168.122.45 create_reverse: true state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-dnsrecord-with-reverse-is-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-presence-multiple-records.yml ensure-presence-multiple-records-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that multiple dns records are present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" records: - name: host1 zone_name: idm.example.com a_rec: 192.168.122.112 a_rec: 192.168.122.122 - name: host1 zone_name: idm.example.com aaaa_rec: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-multiple-records-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-CNAME-record-is-present.yml ensure-CNAME-record-is-present-copy.yml", "--- - name: Ensure that 'www.idm.example.com' and 'ftp.idm.example.com' CNAME records point to 'host03.idm.example.com'. hosts: ipaserver become: true gather_facts: false tasks: - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: www cname_hostname: host03 - name: ftp cname_hostname: host03", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-CNAME-record-is-present.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-SRV-record-is-present.yml ensure-SRV-record-is-present-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure a SRV record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: _kerberos._udp.idm.example.com srv_rec: '10 50 88 idm.example.com' zone_name: idm.example.com state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-SRV-record-is-present.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-ansible-to-manage-dns-records-in-idm_configuring-and-managing-idm
Chapter 10. Multi-Source Models
Chapter 10. Multi-Source Models 10.1. Multi-Source Models Multi-source models can be used to quickly access data in multiple sources with homogeneous metadata. When you have multiple instances of data that use an identical schema (horizontal sharding), JBoss Data Virtualization can help you aggregate data across all the instances by using "multi-source" models. In this scenario, instead of creating or importing a model for every data source, you define one source model that represents the schema and configure multiple data "sources" underneath it. At runtime, when a query is issued against this model, the query engine analyzes the query, gathers the required data from all the configured sources, aggregates the results, and returns them in a single result set. Because all sources use the same physical metadata, this feature is most appropriate for accessing multiple instances of the same source type.
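As an illustrative sketch only — the VDB name, translator, and JNDI connection names below are hypothetical and not taken from this guide — a multi-source model in a dynamic VDB is typically one model definition with several source bindings:

<vdb name="Sales" version="1">
    <!-- One source model; every source uses the same schema -->
    <model name="Orders" type="PHYSICAL">
        <property name="multisource" value="true"/>
        <source name="east" translator-name="postgresql" connection-jndi-name="java:/ordersEastDS"/>
        <source name="west" translator-name="postgresql" connection-jndi-name="java:/ordersWestDS"/>
    </model>
</vdb>

A query against Orders is then planned once, executed against both bindings, and the engine merges the partial results into the single result set described above.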
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-multi-source_models
12.3. JSON Representation of a Storage Domain
12.3. JSON Representation of a Storage Domain Example 12.2. A JSON representation of a storage domain
[ "{ \"storage_domain\" : [ { \"type\" : \"data\", \"master\" : \"false\", \"storage\" : { \"address\" : \"192.0.2.0\", \"type\" : \"nfs\", \"path\" : \"/storage/user/nfs\" }, \"available\" : 193273528320, \"used\" : 17179869184, \"committed\" : 0, \"storage_format\" : \"v3\", \"name\" : \"NFS_01\", \"href\" : \"/ovirt-engine/api/storagedomains/8827b158-6d2e-442d-a7ee-c6fd4718aaba\", \"id\" : \"8827b158-6d2e-442d-a7ee-c6fd4718aaba\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/storagedomains/8827b158-6d2e-442d-a7ee-c6fd4718aaba/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/storagedomains/8827b158-6d2e-442d-a7ee-c6fd4718aaba/disks\", \"rel\" : \"disks\" }, { \"href\" : \"/ovirt-engine/api/storagedomains/8827b158-6d2e-442d-a7ee-c6fd4718aaba/storageconnections\", \"rel\" : \"storageconnections\" }, { \"href\" : \"/ovirt-engine/api/storagedomains/8827b158-6d2e-442d-a7ee-c6fd4718aaba/disksnapshots\", \"rel\" : \"disksnapshots\" }, { \"href\" : \"/ovirt-engine/api/storagedomains/8827b158-6d2e-442d-a7ee-c6fd4718aaba/diskprofiles\", \"rel\" : \"diskprofiles\" } ] } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/json_representation_of_a_storage_domain
Chapter 1. Overview of Builds
Chapter 1. Overview of Builds Builds is an extensible build framework based on the Shipwright project , which you can use to build container images on an OpenShift Container Platform cluster. You can build container images from source code and Dockerfiles by using image build tools, such as Source-to-Image (S2I) and Buildah. You can create and apply build resources, view logs of build runs, and manage builds in your OpenShift Container Platform namespaces. Builds includes the following capabilities: Standard Kubernetes-native API for building container images from source code and Dockerfiles Support for Source-to-Image (S2I) and Buildah build strategies Extensibility with your own custom build strategies Execution of builds from source code in a local directory Shipwright CLI for creating and viewing logs, and managing builds on the cluster Integrated user experience with the Developer perspective of the OpenShift Container Platform web console Note Because Builds releases on a different cadence from OpenShift Container Platform, the Builds documentation is now available as a separate documentation set at builds for Red Hat OpenShift .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_shipwright/overview-openshift-builds
Chapter 39. Maven settings and repositories for Red Hat Decision Manager
Chapter 39. Maven settings and repositories for Red Hat Decision Manager When you create a Red Hat Decision Manager project, Business Central uses the Maven repositories that are configured for Business Central. You can use the Maven global or user settings to direct all Red Hat Decision Manager projects to retrieve dependencies from the public Red Hat Decision Manager repository by modifying the Maven project object model (POM) file ( pom.xml ). You can also configure Business Central and KIE Server to use an external Maven repository or prepare a Maven mirror for offline use. For more information about Red Hat Decision Manager packaging and deployment options, see Packaging and deploying a Red Hat Decision Manager project . 39.1. Configuring Maven using the project configuration file ( pom.xml ) To use Maven for building and managing your Red Hat Decision Manager projects, you must create and configure the POM file ( pom.xml ). This file holds configuration information for your project. For more information, see Apache Maven Project . Procedure Generate a Maven project. A pom.xml file is automatically generated when you create a Maven project. Edit the pom.xml file to add more dependencies and new repositories. Maven downloads all of the JAR files and the dependent JAR files from the Maven repository when you compile and package your project. Find the schema for the pom.xml file at http://maven.apache.org/maven-v4_0_0.xsd . For more information about POM files, see Apache Maven Project POM . 39.2. Modifying the Maven settings file Red Hat Decision Manager uses the Maven settings.xml file to configure its Maven execution. You must create and activate a profile in the settings.xml file and declare the Maven repositories used by your Red Hat Decision Manager projects. For information about the Maven settings.xml file, see the Apache Maven Project Setting Reference . Procedure In the settings.xml file, declare the repositories that your Red Hat Decision Manager projects use. Usually, this is either the online Red Hat Decision Manager Maven repository or the Red Hat Decision Manager Maven repository that you download from the Red Hat Customer Portal, and any repositories for custom artifacts that you want to use. Ensure that Business Central or KIE Server is configured to use the settings.xml file. For example, specify the kie.maven.settings.custom=<SETTINGS_FILE_PATH> property where <SETTINGS_FILE_PATH> is the path to the settings.xml file. On Red Hat JBoss Web Server, for KIE Server add -Dkie.maven.settings.custom=<SETTINGS_FILE_PATH> to the CATALINA_OPTS section of the setenv.sh (Linux) or setenv.bat (Windows) file. For standalone Business Central, enter the following command: 39.3. Adding Maven dependencies for Red Hat Decision Manager To use the correct Maven dependencies in your Red Hat Decision Manager project, add the Red Hat Business Automation bill of materials (BOM) files to the project's pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? .
Procedure Declare the Red Hat Business Automation BOM in the pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies> Declare dependencies required for your project in the <dependencies> tag. After you import the product BOM into your project, the versions of the user-facing product dependencies are defined so you do not need to specify the <version> sub-element of these <dependency> elements. However, you must use the <dependency> element to declare dependencies which you want to use in your project. For standalone projects that are not authored in Business Central, specify all dependencies required for your projects. In projects that you author in Business Central, the basic decision engine dependencies are provided automatically by Business Central. For a basic Red Hat Decision Manager project, declare the following dependencies, depending on the features that you want to use: For a basic Red Hat Decision Manager project, declare the following dependencies: Embedded decision engine dependencies <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency> To use KIE Server, declare the following dependencies: Client application KIE Server dependencies <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency> To create a remote client for Red Hat Decision Manager, declare the following dependency: Client dependency <dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency> When creating a JAR file that includes assets, such as rules and process definitions, specify the packaging type for your Maven project as kjar and use org.kie:kie-maven-plugin to process the kjar packaging type located under the <project> element. In the following example, USD{kie.version} is the Maven library version listed in What is the mapping between Red Hat Decision Manager and the Maven library version? : <packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> 39.4. Preparing a Maven mirror repository for offline use If your Red Hat Process Automation Manager deployment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment. 
Note You do not need to complete this procedure if your Red Hat Process Automation Manager deployment is connected to the Internet. Prerequisites A computer that has outgoing access to the public Internet is available. Procedure On the computer that has an outgoing connection to the public Internet, complete the following steps: Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download and extract the Red Hat Process Automation Manager 7.13.5 Offliner Content List ( rhpam-7.13.5-offliner.zip ) product deliverable file. Extract the contents of the rhpam-7.13.5-offliner.zip file into any directory. Change to the directory and enter the following command: This command creates the repository subdirectory and downloads the necessary artifacts into this subdirectory. This is the mirror repository. If a message reports that some downloads have failed, run the same command again. If downloads fail again, contact Red Hat support. If you developed services outside of Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet. Create a backup of the local Maven cache directory ( ~/.m2/repository ) and then clear the directory. Build the source of your projects using the mvn clean install command. For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project: Replace /path/to/project/pom.xml with the path of the pom.xml file of the project. Copy the contents of the local Maven cache directory ( ~/.m2/repository ) to the repository subdirectory that was created. Copy the contents of the repository subdirectory to a directory on the computer on which you deployed Red Hat Process Automation Manager. This directory becomes the offline Maven mirror repository. Create and configure a settings.xml file for your Red Hat Process Automation Manager deployment as described in Section 39.2, "Modifying the Maven settings file" . Make the following changes in the settings.xml file: Under the <profile> tag, if a <repositories> or <pluginRepositores> tag is missing, add the missing tags. Under <repositories> add the following content: <repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory. Under <pluginRepositories> add the following content: <repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory.
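Section 39.2 asks you to create and activate a profile in settings.xml, but the chapter only shows the individual <repository> snippets. The following is a minimal sketch of a complete settings.xml wrapping those snippets in a profile; the profile id is an illustrative assumption, and /path/to/repo stands for your repository location or URL. Note that inside <pluginRepositories>, Maven expects <pluginRepository> elements.

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <profiles>
    <profile>
      <!-- Illustrative profile id -->
      <id>rhdm-repositories</id>
      <repositories>
        <repository>
          <id>offline-repository</id>
          <url>file:///path/to/repo</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>offline-plugin-repository</id>
          <url>file:///path/to/repo</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>rhdm-repositories</activeProfile>
  </activeProfiles>
</settings>

Point Business Central or KIE Server at this file with kie.maven.settings.custom=<SETTINGS_FILE_PATH> as described in Section 39.2.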
[ "java -jar rhpam-7.13.5-business-central-standalone.jar --cli-script=application-script.cli -Dkie.maven.settings.custom=<SETTINGS_FILE_PATH>", "<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies>", "<dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency>", "<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency>", "<dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency>", "<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>", "./offline-repo-builder.sh offliner.txt", "mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true", "<repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>", "<repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/maven-repo-using-con_install-on-jws
5.193. mingw32-qpid-cpp
5.193. mingw32-qpid-cpp 5.193.1. RHBA-2012:0756 - mingw32-qpid-cpp bug fix update Updated mingw32-qpid-cpp packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The mingw32-qpid-cpp packages provide a message broker daemon that receives, stores, and routes messages by means of runtime libraries for Advanced Message Queuing Protocol (AMQP) client applications developed using the Qpid C++ language. Bug Fixes BZ# 751349 Previously, HTML documentation was required by the mingw32-qpid-cpp package builds, but was not available. Consequently, the following error message was displayed during the build process: As the HTML documentation is not considered essential, this update disables its generation. As a result, the aforementioned error message is not displayed during the build process of the mingw32-qpid-cpp package. BZ# 807345 Previously, mingw32-qpid-cpp had an unnecessary dependency on the mingw32-gnutls package. This update removes the dependency. BZ# 813537 Previously, mingw32-qpid-cpp had an unnecessary dependency on the mingw32-libxslt package. This update removes the dependency. All users of mingw32-qpid-cpp are advised to upgrade to these updated packages, which fix these bugs.
[ "CMake Error at docs/api/cmake_install.cmake:31 (FILE): file INSTALL cannot find file \"/usr/src/redhat/BUILD/qpid-cpp-0.12/build/docs/api/html\" to install." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mingw32-qpid-cpp
Chapter 6. Using a reverse proxy
Chapter 6. Using a reverse proxy Distributed environments frequently require the use of a reverse proxy. Red Hat build of Keycloak offers several options to securely integrate with such environments. 6.1. Configure the reverse proxy headers Red Hat build of Keycloak will parse the reverse proxy headers based on the proxy-headers option which accepts several values: By default if the option is not specified, no reverse proxy headers are parsed. forwarded enables parsing of the Forwarded header as per RFC7239 . xforwarded enables parsing of non-standard X-Forwarded-* headers, such as X-Forwarded-For , X-Forwarded-Proto , X-Forwarded-Host , and X-Forwarded-Port . For example: bin/kc.[sh|bat] start --proxy-headers forwarded Warning If either forwarded or xforwarded is selected, make sure your reverse proxy properly sets and overwrites the Forwarded or X-Forwarded-* headers respectively. To set these headers, consult the documentation for your reverse proxy. Misconfiguration will leave Red Hat build of Keycloak exposed to security vulnerabilities. Take extra precautions to ensure that the client address is properly set by your reverse proxy via the Forwarded or X-Forwarded-For headers. If this header is incorrectly configured, rogue clients can set this header and trick Red Hat build of Keycloak into thinking the client is connected from a different IP address than the actual address. This precaution can be more critical if you do any deny or allow listing of IP addresses. Note When using the xforwarded setting, the X-Forwarded-Port takes precedence over any port included in the X-Forwarded-Host . 6.2. Proxy modes Note The support for setting proxy modes is deprecated and will be removed in a future Red Hat build of Keycloak release. Consider configuring accepted reverse proxy headers instead as described in the chapter above. For migration instructions consult the Upgrading Guide . For Red Hat build of Keycloak, your choice of proxy modes depends on the TLS termination in your environment. The following proxy modes are available: edge Enables communication through HTTP between the proxy and Red Hat build of Keycloak. This mode is suitable for deployments with a highly secure internal network where the reverse proxy keeps a secure connection (HTTP over TLS) with clients while communicating with Red Hat build of Keycloak using HTTP. reencrypt Requires communication through HTTPS between the proxy and Red Hat build of Keycloak. This mode is suitable for deployments where internal communication between the reverse proxy and Red Hat build of Keycloak should also be protected. Different keys and certificates are used on the reverse proxy as well as on Red Hat build of Keycloak. passthrough The proxy forwards the HTTPS connection to Red Hat build of Keycloak without terminating TLS. The secure connections between the server and clients are based on the keys and certificates used by the Red Hat build of Keycloak server. When in edge or reencrypt proxy mode, Red Hat build of Keycloak will parse the following headers and expects the reverse proxy to set them: Forwarded as per RFC7239 Non-standard X-Forwarded-* , such as X-Forwarded-For , X-Forwarded-Proto , X-Forwarded-Host , and X-Forwarded-Port 6.2.1. Configure the proxy mode in Red Hat build of Keycloak To select the proxy mode, enter this command: bin/kc.[sh|bat] start --proxy <mode> 6.3. 
Different context-path on reverse proxy Red Hat build of Keycloak assumes it is exposed through the reverse proxy under the same context path as Red Hat build of Keycloak is configured for. By default Red Hat build of Keycloak is exposed through the root ( / ), which means it expects to be exposed through the reverse proxy on / as well. You can use hostname-path or hostname-url in these cases, for example using --hostname-path=/auth if Red Hat build of Keycloak is exposed through the reverse proxy on /auth . Alternatively, you can change the context path of Red Hat build of Keycloak itself to match the context path used by the reverse proxy with the http-relative-path option. 6.4. Trust the proxy to set hostname By default, Red Hat build of Keycloak needs to know under which hostname it will be called. If your reverse proxy is configured to check for the correct hostname, you can set Red Hat build of Keycloak to accept any hostname. bin/kc.[sh|bat] start --proxy-headers=forwarded|xforwarded --hostname-strict=false 6.5. Enable sticky sessions A typical cluster deployment consists of the load balancer (reverse proxy) and two or more Red Hat build of Keycloak servers on a private network. For performance purposes, it may be useful if the load balancer forwards all requests related to a particular browser session to the same Red Hat build of Keycloak backend node. The reason is that Red Hat build of Keycloak uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session. The Infinispan distributed caches are configured with two owners by default. That means that a particular session is primarily stored on two cluster nodes, and the other nodes need to look up the session remotely if they want to access it. For example, if an authentication session with ID 123 is saved in the Infinispan cache on node1, and node2 needs to look up this session, it has to send a request to node1 over the network to return the particular session entity. It is beneficial if a particular session entity is always available locally, which can be achieved with the help of sticky sessions. The workflow in a cluster environment with a public frontend load balancer and two backend Red Hat build of Keycloak nodes is as follows: The user sends an initial request to see the Red Hat build of Keycloak login screen. This request is served by the frontend load balancer, which forwards it to some random node (e.g. node1). Strictly speaking, the node does not need to be random, but can be chosen according to other criteria (client IP address, and so on). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy). Red Hat build of Keycloak creates an authentication session with a random ID (e.g. 123) and saves it to the Infinispan cache. The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID. See the Infinispan documentation for more details around this. Let's assume that Infinispan assigned node2 to be the owner of this session. Red Hat build of Keycloak creates the cookie AUTH_SESSION_ID with a format like <session-id>.<owner-node-id> . In our example case, it will be 123.node2 .
A response is returned to the user with the Red Hat build of Keycloak login screen and the AUTH_SESSION_ID cookie in the browser. From this point, it is beneficial if the load balancer forwards all requests to node2, as this is the node that owns the authentication session with ID 123, and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on node2 because it has the same ID 123 . Sticky sessions are not mandatory for the cluster setup; however, they are good for performance for the reasons mentioned above. You need to configure your load balancer to provide session affinity based on the AUTH_SESSION_ID cookie. How exactly to do this depends on your load balancer; a minimal HAProxy sketch is shown after the options table at the end of this chapter. If your proxy supports session affinity without processing cookies from backend nodes, you should set the spi-sticky-session-encoder-infinispan-should-attach-route option to false in order to avoid attaching the node to cookies and just rely on the reverse proxy capabilities. bin/kc.[sh|bat] start --spi-sticky-session-encoder-infinispan-should-attach-route=false By default, the spi-sticky-session-encoder-infinispan-should-attach-route option value is true so that the node name is attached to cookies to indicate to the reverse proxy the node that subsequent requests should be sent to. 6.5.1. Exposing the administration console By default, the administration console URLs are created solely based on the requests to resolve the proper scheme, host name, and port. For instance, if you are using the edge proxy mode and your proxy is misconfigured, backend requests from your TLS termination proxy are going to use plain HTTP and can potentially prevent the administration console from being accessible, because URLs are going to be created using the http scheme and the proxy does not support plain HTTP. To properly expose the administration console, make sure that your proxy sets the X-Forwarded-* headers mentioned here so that URLs are created using the scheme, host name, and port exposed by your proxy. 6.5.2. Exposed path recommendations When using a reverse proxy, only certain Red Hat build of Keycloak paths need to be exposed. The following table shows the recommended paths to expose. Red Hat build of Keycloak Path Reverse Proxy Path Exposed Reason / - No When exposing all paths, admin paths are exposed unnecessarily. /admin/ - No Exposed admin paths lead to an unnecessary attack vector. /js/ - Yes (see note below) Access to keycloak.js needed for "internal" clients, e.g. the account console /welcome/ - No No need exists to expose the welcome page after initial installation. /realms/ /realms/ Yes This path is needed to work correctly, for example, for OIDC endpoints. /resources/ /resources/ Yes This path is needed to serve assets correctly. It may be served from a CDN instead of the Red Hat build of Keycloak path. /robots.txt /robots.txt Yes Search engine rules /metrics - No Exposed metrics lead to an unnecessary attack vector. /health - No Exposed health checks lead to an unnecessary attack vector. Note Although the js path is needed for internal clients like the account console, it is good practice to use keycloak.js from a JavaScript package manager like npm or yarn for your external clients. We assume you run Red Hat build of Keycloak on the root path / on your reverse proxy/gateway's public API. If not, prefix the path with your desired one. 6.5.3.
Enabling client certificate lookup When the proxy is configured as a TLS termination proxy, the client certificate information can be forwarded to the server through specific HTTP request headers and then used to authenticate clients. You can configure how the server retrieves client certificate information depending on the proxy you are using. Warning Client certificate lookup via a proxy header for X.509 authentication is considered security-sensitive. If misconfigured, a forged client certificate header can be used for authentication. Extra precautions need to be taken to ensure that the client certificate information can be trusted when passed via a proxy header. Double-check that your use case needs reencrypt or edge TLS termination, which implies using a proxy header for client certificate lookup. TLS passthrough is recommended as a more secure option when X.509 authentication is desired, as it does not require passing the certificate via a proxy header. Client certificate lookup from a proxy header is applicable only to reencrypt and edge TLS termination. If passthrough is not an option, implement the following security measures: Configure your network so that Red Hat build of Keycloak is isolated and can accept connections only from the proxy. Make sure that the proxy overwrites the header that is configured in the spi-x509cert-lookup-<provider>-ssl-client-cert option. Pay extra attention to the spi-x509cert-lookup-<provider>-trust-proxy-verification setting. Make sure you enable it only if you can trust your proxy to verify the client certificate. Setting spi-x509cert-lookup-<provider>-trust-proxy-verification=true without the proxy verifying the client certificate chain will expose Red Hat build of Keycloak to a security vulnerability where a forged client certificate can be used for authentication. The server supports some of the most common TLS termination proxies, such as: Proxy Provider Apache HTTP Server apache HAProxy haproxy NGINX nginx To configure how client certificates are retrieved from the requests, you need to: Enable the corresponding proxy provider bin/kc.[sh|bat] build --spi-x509cert-lookup-provider=<provider> Configure the HTTP headers bin/kc.[sh|bat] start --spi-x509cert-lookup-<provider>-ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup-<provider>-ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup-<provider>-certificate-chain-length=10 When configuring the HTTP headers, you need to make sure the values you are using correspond to the names of the headers forwarded by the proxy with the client certificate information. The available options for configuring a provider are: Option Description ssl-client-cert The name of the header holding the client certificate ssl-cert-chain-prefix The prefix of the headers holding additional certificates in the chain, used to retrieve individual certificates according to the length of the chain. For instance, a value CERT_CHAIN will tell the server to load additional certificates from headers CERT_CHAIN_0 to CERT_CHAIN_9 if certificate-chain-length is set to 10 . certificate-chain-length The maximum length of the certificate chain. trust-proxy-verification Enable trusting NGINX proxy certificate verification, instead of forwarding the certificate to Red Hat build of Keycloak and verifying it in Red Hat build of Keycloak. 6.5.3.1. Configuring the NGINX provider The NGINX SSL/TLS module does not expose the client certificate chain.
Red Hat build of Keycloak's NGINX certificate lookup provider rebuilds it by using the Red Hat build of Keycloak truststore. If you are using this provider, see Configuring trusted certificates for how to configure a Red Hat build of Keycloak Truststore. 6.6. Relevant options Value hostname-path This should be set if the proxy uses a different context-path for Keycloak. CLI: --hostname-path Env: KC_HOSTNAME_PATH hostname-url Set the base URL for frontend URLs, including scheme, host, port and path. CLI: --hostname-url Env: KC_HOSTNAME_URL http-relative-path 🛠 Set the path relative to / for serving resources. The path must start with a / . CLI: --http-relative-path Env: KC_HTTP_RELATIVE_PATH / (default) proxy The proxy address forwarding mode if the server is behind a reverse proxy. CLI: --proxy Env: KC_PROXY DEPRECATED. Use: proxy-headers . none (default), edge , reencrypt , passthrough proxy-headers The proxy headers that should be accepted by the server. Misconfiguration might leave the server exposed to security vulnerabilities. Takes precedence over the deprecated proxy option. CLI: --proxy-headers Env: KC_PROXY_HEADERS forwarded , xforwarded
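Section 6.5 leaves the sticky-session configuration of the load balancer itself to the reader. As one possible illustration, here is a minimal HAProxy backend sketch in which the proxy inserts its own affinity cookie instead of relying on the node name attached by Red Hat build of Keycloak; the backend name, cookie name, and node addresses are assumptions for the example, not values taken from this guide.

backend keycloak_backend
    balance roundrobin
    # Proxy-managed affinity cookie; pairs with --spi-sticky-session-encoder-infinispan-should-attach-route=false
    cookie KC_ROUTE insert indirect nocache
    # Forward client address and scheme as expected by --proxy-headers xforwarded
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    server node1 10.0.0.11:8080 check cookie node1
    server node2 10.0.0.12:8080 check cookie node2

If you keep the default should-attach-route=true behavior, most load balancers can instead key affinity on the AUTH_SESSION_ID cookie that the server sets, as described in section 6.5.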
[ "bin/kc.[sh|bat] start --proxy-headers forwarded", "bin/kc.[sh|bat] start --proxy <mode>", "bin/kc.[sh|bat] start --proxy-headers=forwarded|xforwarded --hostname-strict=false", "bin/kc.[sh|bat] start --spi-sticky-session-encoder-infinispan-should-attach-route=false", "bin/kc.[sh|bat] build --spi-x509cert-lookup-provider=<provider>", "bin/kc.[sh|bat] start --spi-x509cert-lookup-<provider>-ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup-<provider>-ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup-<provider>-certificate-chain-length=10" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/reverseproxy-
Chapter 2. Get Started with Containers
Chapter 2. Get Started with Containers 2.1. Install and Deploy an Apache Web Server Container 2.1.1. Overview A Web server is one of the most basic examples used to illustrate how containers work. The procedure in this topic does the following: Builds an Apache (httpd) Web server inside a container Exposes the service on port 80 of the host Serves a simple index.html file Displays data from a backend server (needs additional MariaDB container described later) 2.1.2. Creating and running the Apache Web Server Container Install system : Install a RHEL 7 or RHEL Atomic system that includes the docker package and start the docker service. Pull image : Pull the rhel7 image by typing the following: Get tarball with supporting files : Download the tarball file attached to this article (get it here: web_cont_3.tgz ) to a new mywebcontainer directory, and untar it as follows: Modify action CGI script : Edit the action file as needed, which will be used to get data from the backend database server container. This script assumes that the docker0 interface on the host system is at IP address 172.17.42.1 , you can log in to the database with the dbuser1 user account and redhat as the password, and use the database named gss . If that is the IP address and you use the database container described later, you don't need to modify this script. (You can also just ignore this script and just use the Web server to get HTML content.) Check the Dockerfile : Modify the Dockerfile file in the ~/mywebcontainer directory as needed (perhaps only modify Maintainer_Name to add your name). Here are the contents of that file: Build Web server container: From the directory containing the Dockerfile file and other content, type the following: Start the Web server container: To start the container image, run the following command: Test the Web server container: To check that the Web server is operational, run the first curl command below. If you have the backend database container running, try the second command: If you have a Web browser installed on the localhost, you can open a Web browser to see a better representation of the few lines of output. Just open the browser to this URL: http://localhost/cgi-bin/action 2.1.3. Tips for this container Here are some tips to help you use the Web Server container: Modify for MariaDB: To use this container with the MariaDB container (described later), you may need to edit the action script and change the IP address from 172.17.42.1 to the host IP on the docker0 interface. To find what that address is on your host, type the following: Adding content: You can include your own content, mounted from the local host, by using the -v option on the docker run command line. For example: 2.1.4. Attachments Apache Web container tar file: action CGI script and Dockerfile 2.2. Install and Deploy a MariaDB Container 2.2.1. Overview Using MariaDB, you can set up a basic database in a container that can be accessed by other applications. The procedure in this topic does the following: Builds a MariaDB database server inside a docker formatted container Exposes the service on port 3306 of the host Starts up the database service to share a few pieces of information Allows a script from the Web server to query the database (needs additional Web server container described later) Offers tips on how to use and extend this container 2.2.2.
Creating and running the MariaDB Database Server Container Install system: Install a Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux Atomic Host system that includes the docker package and start the docker service. Pull image: Pull the rhel7 image by typing the following: Get tarball with supporting files: Download the tarball file attached to this article ( mariadb_cont_2.tgz ) to a new mydbcontainer directory, and untar it as follows: Check the Dockerfile: Modify the Dockerfile file in the ~/mydbcontainer directory as needed (perhaps only modify Maintainer_Name to add your name). Here are the contents of that file: Build database server container: From the directory containing the Dockerfile file and other content, type the following: Start the database server container: To start the container image, run the following command: Test the database server container: Assuming the docker0 interface on the host is 172.17.42.1 (yours may be different), check that the database container is operational by running the nc command (in RHEL 7, type yum install nc to get it) as shown here: 2.2.3. Tips for this container Here are some tips to help you use the MariaDB database server container: Adding your own database: You can include your own MariaDB content by copying your database file to the build directory and changing the name of the database file from gss_db.sql to the name of your database (in several places in the Dockerfile file). Orchestrate containers: A better way to manage this container with other containers is to use Kubernetes to orchestrate them into pods; a minimal pod sketch is shown below. 2.2.4. Attachments Tar file containing gss_db.sql database and Dockerfile files for MariaDB container
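As a rough sketch of the orchestration tip above, the following Kubernetes pod definition runs the two images built in this chapter side by side. The image names ( webwithdb and dbforweb ) and ports come from the chapter; the pod name, labels, and file name are assumptions for the example.

apiVersion: v1
kind: Pod
metadata:
  name: web-db-pod
  labels:
    app: web-db-demo
spec:
  containers:
  - name: webwithdb
    # Apache Web server image built in section 2.1
    image: webwithdb
    ports:
    - containerPort: 80
  - name: dbforweb
    # MariaDB image built in section 2.2
    image: dbforweb
    ports:
    - containerPort: 3306

Saving this as web-db-pod.yaml and running kubectl create -f web-db-pod.yaml on a host with Kubernetes configured would start both containers in a single pod; the Web server could then reach the database over the pod's shared network, or via a Kubernetes service that sets DB_SERVICE_SERVICE_HOST (which the action script already checks), rather than the hardcoded docker0 address.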
[ "docker pull rhel7:latest", "mkdir ~/mywebcontainer cp web_cont*.tgz ~/mywebcontainer cd ~/mywebcontainer tar xvf web_cont*.tgz action Dockerfile", "#!/usr/bin/python -*- coding: utf-8 -*- import MySQLdb as mdb import os con = mdb.connect(os.getenv('DB_SERVICE_SERVICE_HOST','172.17.42.1'), 'dbuser1', 'redhat', 'gss') with con: cur = con.cursor() cur.execute(\"SELECT MESSAGE FROM atomic_training\") rows = cur.fetchall() print 'Content-type:text/html\\r\\n\\r\\n' print '<html>' print '<head>' print '<title>My Application</title>' print '</head>' print '<body>' for row in rows: print '<h2>' + row[0] + '</h2>' print '</body>' print '</html>' con.close()", "Webserver container with CGI python script Using RHEL 7 base image and Apache Web server Version 1 Pull the rhel image from the local registry FROM rhel7:latest USER root MAINTAINER Maintainer_Name Fix per https://bugzilla.redhat.com/show_bug.cgi?id=1192200 RUN yum -y install deltarpm yum-utils --disablerepo=*-eus-* --disablerepo=*-htb-* *-sjis-* --disablerepo=*-ha-* --disablerepo=*-rt-* --disablerepo=*-lb-* --disablerepo=*-rs-* --disablerepo=*-sap-* RUN yum-config-manager --disable *-eus-* *-htb-* *-ha-* *-rt-* *-lb-* *-rs-* *-sap-* *-sjis* > /dev/null Update image RUN yum update -y RUN yum install httpd procps-ng MySQL-python -y Add configuration file ADD action /var/www/cgi-bin/action RUN echo \"PassEnv DB_SERVICE_SERVICE_HOST\" >> /etc/httpd/conf/httpd.conf RUN chown root:apache /var/www/cgi-bin/action RUN chmod 755 /var/www/cgi-bin/action RUN echo \"The Web Server is Running\" > /var/www/html/index.html EXPOSE 80 Start the service CMD mkdir /run/httpd ; /usr/sbin/httpd -D FOREGROUND", "docker build -t webwithdb . Sending build context to Docker daemon 4.096 kB Sending build context to Docker daemon Step 0 : FROM rhel7:latest ---> bef54b8f8a2f Step 1 : USER root ---> Running in 00c28d347131 ---> cd7ef0fcaf55", "docker run -d -p 80:80 --name=mywebwithdb webwithdb", "curl http://localhost/index.html The Web Server is Running curl http://localhost/cgi-bin/action <html> <head> <title>My Application</title> </head> <body> <h2>RedHat rocks</h2> <h2>Success</h2> </body> </html> </tt></pre>", "ip a | grep docker0 | grep inet inet 172.17.42.1/16 scope global docker0", "docker run -d -p 80:80 -v /var/www/html:/var/www/html --name=mywebwithdb webwithdb", "docker pull rhel7:latest", "mkdir ~/mydbcontainer cp mariadb_cont*.tgz ~/mydbcontainer cd ~/mydbcontainer tar xvf mariadb_cont*.tgz gss_db.sql Dockerfile", "Database container with simple data for a Web application Using RHEL 7 base image and MariahDB database Version 1 Pull the rhel image from the local repository FROM rhel7:latest USER root MAINTAINER Maintainer_Name Update image RUN yum update -y --disablerepo=*-eus-* --disablerepo=*-htb-* --disablerepo=*sjis* --disablerepo=*-ha-* --disablerepo=*-rt-* --disablerepo=*-lb-* --disablerepo=*-rs-* --disablerepo=*-sap-* RUN yum-config-manager --disable *-eus-* *-htb-* *-ha-* *-rt-* *-lb-* *-rs-* *-sap-* *-sjis-* > /dev/null Add Mariahdb software RUN yum -y install net-tools mariadb-server Set up Mariahdb database ADD gss_db.sql /tmp/gss_db.sql RUN /usr/libexec/mariadb-prepare-db-dir RUN test -d /var/run/mariadb || mkdir /var/run/mariadb; chmod 0777 /var/run/mariadb; /usr/bin/mysqld_safe --basedir=/usr & sleep 10s && /usr/bin/mysqladmin -u root password 'redhat' && mysql --user=root --password=redhat < /tmp/gss_db.sql && mysqladmin shutdown --password=redhat Expose Mysql port 3306 EXPOSE 3306 Start the service CMD test -d /var/run/mariadb || mkdir 
/var/run/mariadb; chmod 0777 /var/run/mariadb;/usr/bin/mysqld_safe --basedir=/usr", "docker build -t dbforweb . Sending build context to Docker daemon 528.4 kB Sending build context to Docker daemon Step 0 : FROM rhel7:latest ---> bef54b8f8a2f Step 1 : USER root", "docker run -d -p 3306:3306 --name=mydbforweb dbforweb", "nc -v 172.17.42.1 3306 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 172.17.42.1:3306. R 5.5.44-MariaDB?acL3YF31?X?FWbiiTIO2Kd6mysql_native_password Ctrl-C" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_guide/get_started_with_containers
function::task_ancestry
function::task_ancestry Name function::task_ancestry - The ancestry of the given task Synopsis Arguments task task_struct pointer with_time set to 1 to also print the start time of processes (given as a delta from boot time) Description Return the ancestry of the given task in the form of " grandparent_process=>parent_process=>process " .
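As a usage illustration (not taken from this reference page), the following one-line SystemTap script prints the ancestry of every process that calls execve; task_current() is assumed to be available from the standard task tapset.

stap -e 'probe syscall.execve { printf("%s\n", task_ancestry(task_current(), 1)) }'

Running this while starting a shell command prints a chain in the grandparent_process=>parent_process=>process form described above, with start-time deltas included because with_time is set to 1.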
[ "task_ancestry:string(task:long,with_time:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-ancestry
Chapter 3. Configuring the internal OAuth server
Chapter 3. Configuring the internal OAuth server 3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 3.2. OAuth token request flows and responses The OAuth server supports standard authorization code grant and the implicit grant OAuth authorization flows. When requesting an OAuth token using the implicit grant flow ( response_type=token ) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client ), these are the possible server responses from /oauth/authorize , and how they should be handled: Status Content Client response 302 Location header containing an access_token parameter in the URL fragment ( RFC 6749 section 4.2.2 ) Use the access_token value as the OAuth token. 302 Location header containing an error query parameter ( RFC 6749 section 4.1.2.1 ) Fail, optionally surfacing the error (and optional error_description ) query values to the user. 302 Other Location header Follow the redirect, and process the result using these rules. 401 WWW-Authenticate header present Respond to challenge if type is recognized (e.g. Basic , Negotiate , etc), resubmit request, and process the result using these rules. 401 WWW-Authenticate header missing No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token). Other Other Fail, optionally surfacing response body to the user. 3.3. Options for the internal OAuth server Several configuration options are available for the internal OAuth server. 3.3.1. OAuth token duration options The internal OAuth server generates two kinds of tokens: Token Description Access tokens Longer-lived tokens that grant access to the API. Authorize codes Short-lived tokens whose only use is to be exchanged for an access token. You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an OAuthClient object definition. 3.3.2. OAuth grant options When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client's grant strategy. The OAuth client requesting token must provide its own grant strategy. You can apply the following default methods: Grant option Description auto Auto-approve the grant and retry the request. prompt Prompt the user to approve or deny the grant. 3.4. Configuring the internal OAuth server's token duration You can configure default options for the internal OAuth server's token duration. Important By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses. If the default time is insufficient, then this can be modified using the following procedure. Procedure Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default. 
apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1 1 Set accessTokenMaxAgeSeconds to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used. Apply the new configuration file: Note Because you update the existing OAuth server, you must use the oc apply command to apply the change. USD oc apply -f </path/to/file.yaml> Confirm that the changes are in effect: USD oc describe oauth.config.openshift.io/cluster Example output ... Spec: Token Config: Access Token Max Age Seconds: 172800 ... 3.5. Configuring token inactivity timeout for the internal OAuth server You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuth configuration to set a token inactivity timeout. Edit the OAuth object: USD oc edit oauth cluster Add the spec.tokenConfig.accessTokenInactivityTimeout field and set your timeout value: apiVersion: config.openshift.io/v1 kind: OAuth metadata: ... spec: tokenConfig: accessTokenInactivityTimeout: 400s 1 1 Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s . Save the file to apply the changes. Check that the OAuth server pods have restarted: USD oc get clusteroperators authentication Do not continue to the next step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 145m Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. USD oc get clusteroperators kube-apiserver Do not continue to the next step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.13.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Verification Log in to the cluster with an identity from your IDP. Execute a command and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 400 seconds. Try to execute a command from the same identity's session. This command should fail because the token should have expired due to inactivity longer than the configured timeout. Example output error: You must be logged in to the server (Unauthorized) 3.6. Customizing the internal OAuth server URL You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Warning If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints.
For example: USD oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1 1 For self-signed certificates, the ca.crt file must contain the custom CA certificate, otherwise the login will not succeed. The Cluster Authentication Operator publishes the OAuth server's serving certificate in the oauth-serving-cert config map in the openshift-config-managed namespace. You can find the certificate in the data.ca-bundle.crt key of the config map. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 3.7. OAuth server metadata Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover what the address of the <namespace_route> is without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification. Thus, any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information: 1 The authorization server's issuer identifier, which is a URL that uses the https scheme and has no query or fragment components. This is the location where .well-known RFC 5785 resources containing information about the authorization server are published. 2 URL of the authorization server's authorization endpoint. See RFC 6749 . 3 URL of the authorization server's token endpoint. See RFC 6749 . 4 JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised. 5 JSON array containing a list of the OAuth 2.0 response_type values that this authorization server supports. The array values used are the same as those used with the response_types parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591 . 6 JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the grant_types parameter defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591 . 7 JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the code_challenge_method parameter defined in Section 4.3 of RFC 7636 . 
The valid code challenge method values are those registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters . 3.8. Troubleshooting OAuth API events In some cases the API server returns an unexpected condition error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server's state. A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount . The following example warns of a service account that is missing a proper OAuth redirect URI: USD oc get events | grep ServiceAccount Example output 1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Running oc describe sa/<service_account_name> reports any OAuth events associated with the given service account name. USD oc describe sa/proxy | grep -A5 Events Example output Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> The following is a list of the possible event errors: No redirect URI annotations or an invalid URI is specified Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Invalid route specified Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Invalid reference type specified Reason Message NoSAOAuthRedirectURIs [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Missing SA tokens Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens
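As a quick illustration of the metadata endpoint described in section 3.7, any pod inside the cluster can fetch the document with a plain HTTP client. The flags here are an example only; use -k solely for quick testing, or supply the appropriate CA bundle with --cacert instead.

curl -k https://openshift.default.svc/.well-known/oauth-authorization-server

The response is the JSON document shown above, listing the issuer, authorization and token endpoints, supported scopes, response types, grant types, and PKCE code challenge methods.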
[ "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1", "oc apply -f </path/to/file.yaml>", "oc describe oauth.config.openshift.io/cluster", "Spec: Token Config: Access Token Max Age Seconds: 172800", "oc edit oauth cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1", "oc get clusteroperators authentication", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 145m", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.13.0 True False False 145m", "error: You must be logged in to the server (Unauthorized)", "oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }", "oc get events | grep ServiceAccount", "1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "oc describe sa/proxy | grep -A5 Events", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no 
tokens" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/configuring-internal-oauth
Chapter 4. Infrastructure Migration
Chapter 4. Infrastructure Migration To achieve a successful migration from Ansible Automation Platform 1.2 to Ansible Automation Platform 2, this reference environment takes advantage of the capabilities of the Ansible Automation Platform installer. Using the Ansible Automation Platform installer, you'll be able to back up, import, and upgrade to the latest Ansible Automation Platform 2 with a few simple commands. The following sections provide a step-by-step walkthrough of that process. 4.1. Backup Ansible Automation Platform 1.2 on Environment A As our Ansible Automation Platform 1.2 environment from Environment A contains all of our data, the following creates a backup using the Ansible Automation Platform installer on Environment A. Warning Prior to taking a backup, ensure there are no currently running jobs and no jobs scheduled to run. Any data collected after the backup is taken will be LOST. Within Environment A, log in as the ansible user Note This reference environment uses enva_controller1 as the host that contains the Ansible Automation Platform installer directory and binaries. Change to the ansible-tower-setup-3.8.5-X directory Run the Ansible Automation Platform installer to create a backup backup_dest provides the location where the backup of your Ansible Automation Platform database is stored use_compression shrinks the size of the Ansible Automation Platform database backup @credentials.yml passes the password variables and their values encrypted via ansible-vault -- --ask-vault-pass asks for the password used to access the encrypted credentials.yml file -b sets the create a backup option to True Note This reference environment takes advantage of encrypted credentials and does not include passwords in plain text. Details on how to use ansible-vault to encrypt your credentials can be found in Appendix C, Creating an encrypted credentials.yml file. Note The backup process may take some time to complete. 4.2. Import Ansible Automation Platform 1.2 database to Environment B With the backup from Environment A created and available, the following imports the backed up Ansible Automation Platform database using the Ansible Automation Platform installer on Environment B. Within Environment B, log in as the ansible user Note This reference environment uses envb_controller1 as the host that contains the Ansible Automation Platform installer directory and binaries. Change to the ansible-tower-setup-3.8.5-X directory Run the Ansible Automation Platform installer to import the Ansible Automation Platform database restore_backup_file provides the location of the backed up Ansible Automation Platform database use_compression is set to True due to compression being used during the backup process -r sets the restore database option to True Note This reference environment takes advantage of encrypted credentials and does not include passwords in plain text. Details on how to use ansible-vault to encrypt your credentials can be found in Appendix C, Creating an encrypted credentials.yml file. Note The import process may take some time to complete. 4.3. Upgrade Environment B to Ansible Automation Platform 2 With the successful import of the Ansible Automation Platform database, the final step in the migration process is to upgrade the Environment B Ansible Automation Platform 1.2 environment to Ansible Automation Platform 2 and expand the architecture of Environment B as shown in Figure 1.4, "Expanded Environment B Architecture Overview".
Within Environment B, log in as the ansible user Note This reference environment uses envb_controller1 as the host that contains the Ansible Automation Platform installer directory and binaries. Download the Ansible Automation Platform 2.1.1 Setup tar file ansible-automation-platform-setup-2.1.1-1.tar.gz Note For disconnected installs, download the Ansible Automation Platform 2.1.1 Setup Bundle Untar the ansible-automation-platform-setup-2.1.1-1.tar.gz Change directory into ansible-automation-platform-setup-2.1.1-1 Copy the Ansible Automation Platform 1.2 inventory file to the ansible-automation-platform-setup-2.1.1-1 directory Use the Ansible Automation Platform installer to generate an Ansible Automation Platform 2 installation inventory proposal from the copied Ansible Automation Platform 1.2 inventory file Note ansible-core is installed during this process if not already installed. Warning Expect the Ansible Automation Platform installer to fail early in the process when creating the proposal inventory.new.ini. The expected error task looks as follows: Proposed inventory.new.ini Note The variables admin_password, pg_password, and registry_password are not part of the inventory.new.ini file because it is not recommended to store passwords in plain text. An encrypted credentials.yml file is used instead. With the proposed inventory.new.ini created, modify the file to reflect the expanded architecture of Environment B, which includes hop nodes and execution nodes Expanded Environment B inventory.new.ini 1 Execution Environment images are downloaded and included in your installation. Proper credentials are required to download the images. 2 User credential for access to registry_url. 3 Control nodes run project and inventory updates and system jobs, but not execution jobs. Execution capabilities are disabled on these nodes. 4 Setting peer relationships between the execution nodes. 5 Setting node type and peer relationships between the hop nodes and execution nodes. 6 Group of execution nodes with direct connection access to the automation controller nodes. 7 Group of hop nodes that route traffic to their corresponding execution nodes. 8 Group of execution nodes accessible via envb_hopnode-sacramento.example.com 9 Group of execution nodes accessible via envb_hopnode-new-delhi.example.com Run setup.sh to upgrade to Ansible Automation Platform 2 with the following options Verify the Ansible Automation Platform dashboard UI is accessible across all automation controller nodes. Note If you experience a 502 error or a Secure Connection Failed error when accessing the Ansible Automation Platform dashboard via any of your automation controllers, this is likely due to one or both of the following issues: Certificate mismatch Incorrect SELinux context for nginx Appendix D, Post upgrade playbook provides a workaround to fix these issues. A fix is currently being implemented and should be included in an upcoming dot release. The cert mismatch issue is fixed in version 2.1.2 and later of Ansible Automation Platform. The incorrect SELinux context for nginx still requires the workaround Ansible Playbook. See Appendix D, Post upgrade playbook for more details. This reference environment uses credentials.yml for the following variables: * admin_password * registry_password * pg_password For more information regarding the different values that can be set within your inventory file, visit: Setting up the inventory file 4.4.
Configuring instance and instance groups With the upgrade process complete, you'll need to associate your instances with their corresponding instance groups, for example, sacramento and new-delhi. Select Administration->Instance Groups Click on the sacramento instance group Select the Instances tab Click the blue Associate button Within the Select Instances window, select envb_executionnode-3.example.com envb_executionnode-4.example.com Click Save Repeat the process for the new-delhi instance group and associate the instances below with the new-delhi instance group: envb_executionnode-5.example.com envb_executionnode-6.example.com Once complete, disassociate those instances from the default group. Select Administration->Instance Groups Click on the default instance group Select the Instances tab Select the checkbox next to each of the following instances envb_executionnode-3.example.com envb_executionnode-4.example.com envb_executionnode-5.example.com envb_executionnode-6.example.com Click the blue button labeled Disassociate Confirm the disassociation via the red Disassociate button The default instance group should only contain the following instances: envb_executionnode-1.example.com envb_executionnode-2.example.com With the infrastructure migration complete, the focus shifts to migrating Python virtual environments to user-built execution environments.
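The backup, import, and upgrade commands in this chapter all pass an encrypted credentials.yml file to setup.sh. As a rough sketch, and assuming the variable names used in this reference environment (admin_password, pg_password, and registry_password), the unencrypted file might look like the following before being encrypted with ansible-vault encrypt credentials.yml; the values shown are placeholders:
admin_password: '<controller_admin_password>'
pg_password: '<database_password>'
registry_password: '<registry.redhat.io_password>'
See Appendix C, Creating an encrypted credentials.yml file for the full procedure.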
[ "ssh ansible@enva_controller1.example.com", "cd /path/to/ansible-tower-setup-3.8.5-X", "./setup.sh -e 'backup_dest=<mount_point>' -e 'use_compression=True' -e @credentials.yml -b", "ssh ansible@envb_controller1.example.com", "cd /path/to/ansible-tower-setup-3.8.5-X", "./setup.sh -e 'restore_backup_file=<mount_point>/tower-backup-latest.tar.gz -e 'use_compression=True' -e @credentials.yml -r -- --ask-vault-pass", "ssh ansible@envb_controller1.example.com", "tar zxvf ansible-automation-platform-setup-2.1.1-1.tar.gz", "cd ansible-automation-platform-setup-2.1.1-1/", "cp /path/to/ansible-tower-setup-3.8.5-X/inventory .", "./setup.sh", "TASK [ansible.automation_platform_installer.check_config_static : Detect pre-2.x inventory and offer a migration] *** fatal: [172.16.58.48 -> localhost]: FAILED! => {\"changed\": false, \"msg\": \"The installer has detected that you are using an inventory format from a version prior to 4.0. We have created an example inventory based on your old style inventory. Please check the file `/home/ansible/aap_install-2.1.1/ansible-automation-platform-setup-bundle-2.1.1-2/inventory.new.ini` and make necessary adjustments so that the file can be used by the installer.\"}", "[all:vars] pg_host='10.0.188.133' pg_port='5432' pg_database='awx' pg_username='awx' pg_sslmode='prefer' ansible_become='true' ansible_user='ansible' tower_package_name='automation-controller' tower_package_version='4.1.1' automationhub_package_name='automation-hub' automationhub_package_version='4.4.1' automation_platform_version='2.1.1' automation_platform_channel='ansible-automation-platform-2.1-for-rhel-8-x86_64-rpms' minimum_ansible_version='2.11' In AAP 2.X [tower] has been renamed to [automationcontroller] Nodes in [automationcontroller] will be hybrid by default, capable of executing user jobs. To specify that any of these nodes should be control-only instead, give them a host var of `node_type=control` [automationcontroller] envb_controller1.example.com envb_controller2.example.com envb_controller3.example.com [database] envb_database.example.com", "[all:vars] pg_host='envb_database.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_sslmode='prefer' ansible_become='true' ansible_user='ansible' tower_package_name='automation-controller' tower_package_version='4.1.1' automationhub_package_name='automation-hub' automationhub_package_version='4.4.1' automation_platform_version='2.1.1' automation_platform_channel='ansible-automation-platform-2.1-for-rhel-8-x86_64-rpms' minimum_ansible_version='2.11' registry_url='registry.redhat.io' 1 registry_username='myusername' 2 In AAP 2.X [tower] has been renamed to [automationcontroller] Nodes in [automationcontroller] will be hybrid by default, capable of executing user jobs. 
To specify that any of these nodes should be control-only instead, give them a host var of `node_type=control` [automationcontroller] envb_controller1.example.com envb_controller2.example.com envb_controller3.example.com [database] envb_database.example.com [automationcontroller:vars] node_type=control 3 peers=envb_datacenter_execution_nodes,envb_datacenter_hop_nodes 4 [execution_nodes] envb_executionnode-1.example.com envb_executionnode-2.example.com envb_hopnode-sacramento.example.com node_type=hop peers=sacramento_execution_nodes 5 envb_hopnode-new-delhi.example.com node_type=hop peers=new-delhi_execution_nodes envb_hopnode-dublin.example.com node_type=hop peers=envb_hopnode-new-delhi.example.com envb_executionnode-3.example.com envb_executionnode-4.example.com envb_executionnode-5.example.com envb_executionnode-6.example.com [envb_datacenter_execution_nodes] 6 envb_executionnode-1.example.com envb_executionnode-2.example.com [envb_datacenter_hop_nodes] 7 envb_hopnode-sacramento.example.com envb_hopnode-new-delhi.example.com envb_hopnode-dublin.example.com [sacramento_execution_nodes] 8 envb_executionnode-3.example.com envb_executionnode-4.example.com [new-delhi_execution_nodes] 9 envb_executionnode-5.example.com envb_executionnode-6.example.com", "./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/ansible_automation_platform_1.2_to_2_migration_guide/infra_migration
Chapter 15. message
Chapter 15. message The original log entry text, UTF-8 encoded. This field may be absent or empty if a non-empty structured field is present. See the description of structured for more. Data type text Example value HAPPY
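As a small illustration of how this field relates to the structured field, consider the following two sketched log records, shown in YAML for readability; the record layout and the keys inside structured are illustrative assumptions, not part of this field's definition:
# Unstructured record: the original line is carried in message
message: "HAPPY"
# Structured record: message may be absent or empty because the parsed
# payload is carried in structured instead
structured:
  level: "info"
  msg: "HAPPY"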
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/message
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. Use the procedures in this guide to create backup resources that can be used for recovering your Red Hat Ansible Automation Platform deployment in the event of a failure.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/backup_and_recovery_for_operator_environments/pr01
Managing Transactions on JBoss EAP
Managing Transactions on JBoss EAP Red Hat JBoss Enterprise Application Platform 7.4 Instructions and information for administrators to troubleshoot Red Hat JBoss Enterprise Application Platform transactions. Red Hat Customer Content Services
[ "<system-properties> <property name=\"RecoveryEnvironmentBean.periodicRecoveryPeriod\" value=\"180\"/> <property name=\"RecoveryEnvironmentBean.recoveryBackoffPeriod\" value=\"20\"/> <property name=\"RecoveryEnvironmentBean.periodicRecoveryInitilizationOffset\" value=\"5\"/> <property name=\"RecoveryEnvironmentBean.expiryScanInterval\" value=\"24\"/> </system-properties>", "/subsystem=datasources/data-source= DATASOURCE_NAME :write-attribute(name=jta,value=true)", "reload", "/profile=full/subsystem=iiop-openjdk:write-attribute(name=security,value=identity)", "/profile=full/subsystem=iiop-openjdk:write-attribute(name=transactions, value=full)", "/profile=full/subsystem=transactions:write-attribute(name=jts,value=true)", "/profile=default/subsystem=transactions/log-store=log-store:probe", "ls /profile=default/subsystem=transactions/log-store=log-store/transactions", "/host=master/server=server-one/subsystem=transactions/log-store=log-store:read-children-names(child-type=transactions)", "/profile=default/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9:read-resource", "/profile=default/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9/participants=java\\:\\/JmsXA:read-resource", "{ \"outcome\" => \"success\", \"result\" => { \"eis-product-name\" => \"ActiveMQ\", \"eis-product-version\" => \"2.0\", \"jndi-name\" => \"java:/JmsXA\", \"status\" => \"HEURISTIC\", \"type\" => \"/StateManager/AbstractRecord/XAResourceRecord\" } }", "/profile=default/subsystem=transactions/log-store=log-store:write-attribute(name=expose-all-logs, value=true)", "/host=master/server=server-one/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9:read-children-names(child-type=participants)", "/profile=default/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9:delete", "/profile=default/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9/participants=2:recover", "/profile=default/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9/participants=2:refresh", "/subsystem=transactions:read-resource(include-runtime=true)", "/subsystem=datasources/data-source=TransDS:write-attribute(name=jta, value=false)", "/subsystem=transactions:write-attribute(name=jdbc-store-datasource, value=java:jboss/datasources/TransDS)", "/subsystem=transactions:write-attribute(name=use-jdbc-store, value=true)", "/subsystem=transactions:write-attribute(name=use-journal-store,value=true)", "/profile=default/subsystem=logging/logger=com.arjuna:write-attribute(name=level,value= VALUE )", "/profile=<PROFILE NAME>/subsystem=logging/logger=org.jboss.jca:add(level=TRACE) /profile=<PROFILE NAME>/subsystem=logging/logger=org.jboss.as.connector:add(level=TRACE) /profile=<PROFILE NAME>/subsystem=logging/logger=com.arjuna:write-attribute(name=level,value=TRACE)", "/subsystem=logging/logger=org.jboss.jca:add(level=TRACE) /subsystem=logging/logger=org.jboss.as.connector:add(level=TRACE) /subsystem=logging/logger=com.arjuna:write-attribute(name=level,value=TRACE)", "/subsystem=logging/console-handler=CONSOLE:write-attribute(name=level,value=TRACE)", "<logger category=\"com.arjuna\"> <level name=\"TRACE\"/> </logger> <logger category=\"org.jboss.jca\"> <level name=\"TRACE\"/> </logger> <logger category=\"org.jboss.as.connector\"> <level name=\"TRACE\"/> </logger>", 
"/subsystem=logging/logger=org.jboss.jbossts.txbridge:add(level=ALL)", "<logger category=\"org.jboss.jbossts.txbridge\"> <level name=\"ALL\" /> </logger>", "/subsystem=logging/logger=com.arjuna:write-attribute(name=level,value=ALL)", "<logger category=\"com.arjuna\"> <level name=\"ALL\" /> </logger>", "com.arjuna.ats.jta.transaction.Transaction arjunaTM = (com.arjuna.ats.jta.transaction.Transaction)tx.getTransaction(); System.out.println(\"Transaction UID\" +arjunaTM.get_uid());", "// Transaction id Uid tx = new Uid(); . . . . TransactionStatusConnectionManager tscm = new TransactionStatusConnectionManager(); // Check if the transaction aborted assertEquals(tscm.getTransactionStatus(tx), ActionStatus.ABORTED);", "EAP_HOME /bin/standalone.sh -DRecoveryEnvironmentBean.transactionStatusManagerPort= NETWORK_PORT_NUMBER", "/subsystem=transactions:write-attribute(name=enable-statistics,value=true)", "public class TxStats { /** * @return the number of transactions (top-level and nested) created so far. */ public static int numberOfTransactions(); /** * @return the number of nested (sub) transactions created so far. * public static int numberOfNestedTransactions(); /** * @return the number of transactions which have terminated with heuristic * outcomes. */ public static int numberOfHeuristics(); /** * @return the number of committed transactions. */ public static int numberOfCommittedTransactions(); /** * @return the total number of transactions which have rolled back. */ public static int numberOfAbortedTransactions(); /** * @return total number of inflight (active) transactions. */ public static int numberOfInflightTransactions (); /** * @return total number of transactions rolled back due to timeout. */ public static int numberOfTimedOutTransactions (); /** * @return the number of transactions rolled back by the application. */ public static int numberOfApplicationRollbacks (); /** * @return number of transactions rolled back by participants. */ public static int numberOfResourceRollbacks (); /** * Print the current information. */ public static void printStatus(java.io.PrintWriter pw); }", "WARN ARJUNA012117 \"TransactionReaper::check timeout for TX {0} in state {1} \"", "/subsystem=transactions:write-attribute(name=default-timeout,value= VALUE )", "tar -cf logs.tar ./standalone/data/tx-object-store", "tar -xf logs.tar -C NEW_EAP_HOME", "cd USD EAP_HOME", "cp docs/examples/configs/standalone-xts.xml standalone/configuration", "bin/standalone.sh --server-config=standalone-xts.xml", "bin\\standalone.bat --server-config=standalone-xts.xml", "EAP_HOME /bin/standalone.sh -DRecoveryEnvironmentBean.expiryScanners= CLASSNAME1 , CLASSNAME2", "EAP_HOME /bin/standalone.sh -DRecoveryEnvironmentBean.expiryScanInterval= EXPIRY_SCAN_INTERVAL", "EAP_HOME /bin/standalone.sh -DRecoveryEnvironmentBean.transactionStatusManagerExpiryTime= TRANSACTION_STATUS_MANAGER_EXPIRY_TIME", "/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9/participants=2:read-resource", "{ \"outcome\" => \"success\", \"result\" => { \"eis-product-name\" => \"ArtemisMQ\", \"eis-product-version\" => \"2.0\", \"jndi-name\" => \"java:/JmsXA\", \"status\" => \"HEURISTIC_HAZARD\", \"type\" => \"/StateManager/AbstractRecord/XAResourceRecord\" } }", "/subsystem=transactions/log-store=log-store/transactions=0\\:ffff7f000001\\:-b66efc2\\:4f9e6f8f\\:9/participants=2:recover" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html-single/managing_transactions_on_jboss_eap/index
Chapter 127. Spring Redis
Chapter 127. Spring Redis Both the producer and consumer are supported. This component allows sending messages to and receiving messages from Redis. Redis is an advanced key-value store where keys can contain strings, hashes, lists, sets and sorted sets. In addition, Redis provides pub/sub functionality for inter-app communication. Camel provides a producer for executing commands, a consumer for subscribing to pub/sub messages, and an idempotent repository for filtering out duplicate messages. Prerequisites To use this component, you must have a Redis server running. 127.1. Dependencies When using spring-redis with Red Hat build of Camel Spring Boot, ensure you use the following Maven dependency to have support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-redis-starter</artifactId> </dependency> Use the BOM to get the version. <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> 127.2. URI Format spring-redis://host:port[?options] 127.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 127.3.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, or URLs for network connection. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code. 127.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 127.4. Component Options The Spring Redis component supports 4 options, which are listed below. Name Description Default Type redisTemplate (common) Autowired Reference to a pre-configured RedisTemplate instance to use. RedisTemplate bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler.
By default the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By starting lazy, Camel's routing error handlers handle any startup failures while routing messages. Beware that when the first message is processed, creating, and starting, the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring of JDBC data sources, JMS connection factories, and AWS Clients. true boolean 127.5. Endpoint Options The Spring Redis endpoint is configured using URI syntax: with the following path and query parameters: 127.5.1. Path Parameters (2 parameters) Name Description Default Type host (common) Required The host where the Redis server is running. String port (common) Required Redis server port number. Integer 127.5.2. Query Parameters (10 parameters) Name Description Default Type channels (common) List of topic names or name patterns to subscribe to. Multiple names can be separated by a comma. String command (common) Default command, which can be overridden by message header. Notice that the consumer only supports the following commands only: PSUBSCRIBE and SUBSCRIBE. Enum values: PING SET GET QUIT EXISTS DEL TYPE FLUSHDB KEYS RANDOMKEY RENAME RENAMENX RENAMEX DBSIZE EXPIRE EXPIREAT TTL SELECT MOVE FLUSHALL GETSET MGET SETNX SETEX MSET MSETNX DECRBY DECR INCRBY INCR APPEND SUBSTR HSET HGET HSETNX HMSET HMGET HINCRBY HEXISTS HDEL HLEN HKEYS HVALS HGETALL RPUSH LPUSH LLEN LRANGE LTRIM LINDEX LSET LREM LPOP RPOP RPOPLPUSH SADD SMEMBERS SREM SPOP SMOVE SCARD SISMEMBER SINTER SINTERSTORE SUNION SUNIONSTORE SDIFF SDIFFSTORE SRANDMEMBER ZADD ZRANGE ZREM ZINCRBY ZRANK ZREVRANK ZREVRANGE ZCARD ZSCORE MULTI DISCARD EXEC WATCH UNWATCH SORT BLPOP BRPOP AUTH SUBSCRIBE PUBLISH UNSUBSCRIBE PSUBSCRIBE PUNSUBSCRIBE ZCOUNT ZRANGEBYSCORE ZREVRANGEBYSCORE ZREMRANGEBYRANK ZREMRANGEBYSCORE ZUNIONSTORE ZINTERSTORE SAVE BGSAVE BGREWRITEAOF LASTSAVE SHUTDOWN INFO MONITOR SLAVEOF CONFIG STRLEN SYNC LPUSHX PERSIST RPUSHX ECHO LINSERT DEBUG BRPOPLPUSH SETBIT GETBIT SETRANGE GETRANGE PEXPIRE PEXPIREAT GEOADD GEODIST GEOHASH GEOPOS GEORADIUS GEORADIUSBYMEMBER SET Command connectionFactory (common) Reference to a pre-configured RedisConnectionFactory instance to use. RedisConnectionFactory redisTemplate (common) Reference to a pre-configured RedisTemplate instance to use. RedisTemplate serializer (common) Reference to a pre-configured RedisSerializer instance to use. RedisSerializer bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occurred while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. The consumer defaults to use the org.apache.camel.spi.ExceptionHandler to deal with exceptions. 
These exceptions are logged at WARN or ERROR level and ignored. False Boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. If you enable the bridgeErrorHandler option, this option is not used. By default, the consumer deals with exceptions, which are logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern listenerContainer (consumer (advanced)) Reference to a pre-configured RedisMessageListenerContainer instance to use. RedisMessageListenerContainer lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route startup to fail. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean 127.6. Message Headers The Spring Redis component supports 29 message headers, which are listed below: Name Description Default Type CamelRedis.Command (producer) Constant: COMMAND The command to perform. String CamelRedis.Key (common) Constant: KEY The key. String CamelRedis.Keys (common) Constant: KEYS The keys. Collection CamelRedis.Field (common) Constant: FIELD The field. String CamelRedis.Fields (common) Constant: FIELDS The fields. Collection CamelRedis.Value (common) Constant: VALUE The value. Object CamelRedis.Values (common) Constant: VALUES The values. Map CamelRedis.Start (common) Constant: START Start. Long CamelRedis.End (common) Constant: END End. Long CamelRedis.Timeout (common) Constant: TIMEOUT The timeout. Long CamelRedis.Offset (common) Constant: OFFSET The offset. Long CamelRedis.Destination (common) Constant: DESTINATION The destination. String CamelRedis.Channel (common) Constant: CHANNEL The channel. byte[] or String CamelRedis.Message (common) Constant: MESSAGE The message. Object CamelRedis.Index (common) Constant: INDEX The index. Long CamelRedis.Position (common) Constant: POSITION The position. String CamelRedis.Pivot (common) Constant: PIVOT The pivot. String CamelRedis.Count (common) Constant: COUNT Count. Long CamelRedis.Timestamp (common) Constant: TIMESTAMP The timestamp. Long CamelRedis.Pattern (common) Constant: PATTERN The pattern. byte[] or String CamelRedis.Db (common) Constant: DB The db. Integer CamelRedis.Score (common) Constant: SCORE The score. Double CamelRedis.Min (common) Constant: MIN The min. Double CamelRedis.Max (common) Constant: MAX The max. Double CamelRedis.Increment (common) Constant: INCREMENT Increment. Double CamelRedis.WithScore (common) Constant: WITHSCORE WithScore. Boolean CamelRedis.Latitude (common) Constant: LATITUDE Latitude. Double CamelRedis.Longitude (common) Constant: LONGITUDE Longitude. Double CamelRedis.Radius (common) Constant: RADIUS Radius. Double 127.7. Usage Also, see the available unit tests.
Redis Producer from("direct:start") .setHeader("CamelRedis.Key", constant(key)) .setHeader("CamelRedis.Value", constant(value)) .to("spring-redis://host:port?command=SET&redisTemplate=#redisTemplate"); Redis Consumer from("spring-redis://host:port?command=SUBSCRIBE&channels=myChannel") .log("Received message: USD{body}"); Note Where '//host:port' is URL address for running Redis server. 127.7.1. Message headers evaluated by the Redis producer The producer issues commands to the server and each command has a different set of parameters with specific types. The result from the command execution is returned in the message body. Hash Commands Description Parameters Result HSET Set the string value of a hash field RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void HGET Get the value of a hash field RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String) String HSETNX Set the value of a hash field, only if the field does not exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void HMSET Set multiple hash fields to multiple values RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUES /"CamelRedis.Values" (Map<String, Object>) Void HMGET Get the values of all the given hash fields RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELDS /"CamelRedis.Filds" (Collection<String>) Collection<Object> HINCRBY Increment the integer value of a hash field by the given number RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String), RedisConstants.VALUE /"CamelRedis.Value" (Long) Long HEXISTS Determine if a hash field exists RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String) Boolean HDEL Delete one or more hash fields RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.FIELD /"CamelRedis.Field" (String) Void HLEN Get the number of fields in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Long HKEYS Get all the fields in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Set<String> HVALS Get all the values in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Collection<Object> HGETALL Get all the fields and values in a hash RedisConstants.KEY /"CamelRedis.Key" (String) Map<String, Object> List Commands Description Parameters Result RPUSH Append one or multiple values to a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long RPUSHX Append a value to a list, only if the list exists RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long LPUSH Prepend one or multiple values to a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long LLEN Get the length of a list RedisConstants.KEY /"CamelRedis.Key" (String) Long LRANGE Get a range of elements from a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long) List<Object> LTRIM Trim a list to the specified range RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long) Void LINDEX Get an element from a list by its index RedisConstants.KEY /"CamelRedis.Key" (String), 
RedisConstants.INDEX /"CamelRedis.Index" (Long) String LINSERT Insert an element before or after another element in a list RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.PIVOT /"CamelRedis.Pivot" (String), RedisConstants.POSITION /"CamelRedis.Position" (String) Long LSET Set the value of an element in a list by its index RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.INDEX /"CamelRedis.Index" (Long) Void LREM Remove elements from a list RedisConstants.KEY / RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.COUNT /"CamelRedis.Count" (Long) Long LPOP Remove and get the first element in a list RedisConstants.KEY /"CamelRedis.Key" (String) Object RPOP Remove and get the last element in a list RedisConstants.KEY /"CamelRedis.Key" (String) String RPOPLPUSH Remove the last element in a list, append it to another list and return it RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Object BRPOPLPUSH Pop a value from a list, push it to another list and return it; or block until one is available RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Object BLPOP Remove and get the first element in a list, or block until one is available RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Object BRPOP Remove and get the last element in a list, or block until one is available RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) String Set Commands Description Parameters Result SADD Add one or more members to a set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SMEMBERS Get all the members in a set RedisConstants.KEY /"CamelRedis.Key" (String) Set<Object> SREM Remove one or more members from a set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SPOP Remove and return a random member from a set RedisConstants.KEY /"CamelRedis.Key" (String) String SMOVE Move a member from one set to another RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Boolean SCARD Get the number of members in a set RedisConstants.KEY /"CamelRedis.Key" (String) Long SISMEMBER Determine if a given value is a member of a set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SINTER Intersect multiple sets RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String) Set<Object> SINTERSTORE Intersect multiple sets and store the resulting set in a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void SUNION Add multiple sets RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String) Set<Object> SUNIONSTORE Add multiple sets and store the resulting set in a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void SDIFF Subtract 
multiple sets RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String) Set<Object> SDIFFSTORE Subtract multiple sets and store the resulting set in a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void SRANDMEMBER Get one or multiple random members from a set RedisConstants.KEY /"CamelRedis.Key" (String) String Ordered set Commands Description Parameters Result ZADD Add one or more members to a sorted set, or update its score if it already exists RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.SCORE /"CamelRedis.Score" (Double) Boolean ZRANGE Return a range of members in a sorted set, by index RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long), RedisConstants.WITHSCORE /"CamelRedis.WithScore" (Boolean) Object ZREM Remove one or more members from a sorted set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean ZINCRBY Increment the score of a member in a sorted set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.INCREMENT /"CamelRedis.Increment" (Double) Double ZRANK Determine the index of a member in a sorted set RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long ZREVRANK Determine the index of a member in a sorted set, with scores ordered from high to low RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long ZREVRANGE Return a range of members in a sorted set, by index, with scores ordered from high to low RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"Long), RedisConstants.END /"CamelRedis.End" (Long), RedisConstants.WITHSCORE /"CamelRedis.WithScore" (Boolean) Object ZCARD Get the number of members in a sorted set RedisConstants.KEY /"CamelRedis.Key" (String) Long ZCOUNT Count the members in a sorted set with scores within the given values RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.MIN /"CamelRedis.Min" (Double), RedisConstants.MAX /"CamelRedis.Max" (Double) Long ZRANGEBYSCORE Return a range of members in a sorted set, by score RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.MIN /"CamelRedis.Min" (Double), RedisConstants.MAX /"CamelRedis.Max" (Double) Set<Object> ZREVRANGEBYSCORE Return a range of members in a sorted set, by score, with scores ordered from high to low RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.MIN /"CamelRedis.Min" (Double), RedisConstants.MAX /"CamelRedis.Max" (Double) Set<Object> ZREMRANGEBYRANK Remove all members in a sorted set within the given indexes RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"(Long), RedisConstants.END /"CamelRedis.End" (Long) Void ZREMRANGEBYSCORE Remove all members in a sorted set within the given scores RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"(Long), RedisConstants.END /"CamelRedis.End" (Long) Void ZUNIONSTORE Add multiple sorted sets and store the resulting sorted set in a new key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void ZINTERSTORE 
Intersect multiple sorted sets and store the resulting sorted set in a new key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.KEYS /"CamelRedis.Keys" (String), RedisConstants.DESTINATION /"CamelRedis.Destination" (String) Void String Commands Description Parameters Result SET Set the string value of a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void GET Get the value of a key RedisConstants.KEY /"CamelRedis.Key" (String) Object STRLEN Get the length of the value stored in a key RedisConstants.KEY /"CamelRedis.Key" (String) Long APPEND Append a value to a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (String) Integer SETBIT Sets or clears the bit at offset in the string value stored at key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.OFFSET /"CamelRedis.Offset" (Long), RedisConstants.VALUE /"CamelRedis.Value" (Boolean) Void GETBIT Returns the bit value at offset in the string value stored at key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.OFFSET /"CamelRedis.Offset" (Long) Boolean SETRANGE Overwrite part of a string at key starting at the specified offset RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.OFFSET /"CamelRedis.Offset" (Long) Void GETRANGE Get a substring of the string stored at a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.START /"CamelRedis.Start"(Long), RedisConstants.END /"CamelRedis.End" (Long) String SETNX Set the value of a key, only if the key does not exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Boolean SETEX Set the value and expiration of a key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long), SECONDS Void DECRBY Decrement the integer value of a key by the given number RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Long) Long DECR Decrement the integer value of a key by one RedisConstants.KEY /"CamelRedis.Key" (String), Long INCRBY Increment the integer value of a key by the given amount RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Long) Long INCR Increment the integer value of a key by one RedisConstants.KEY /"CamelRedis.Key" (String) Long MGET Get the values of all the given keys RedisConstants.FIELDS /"CamelRedis.Filds" (Collection<String>) List<Object> MSET Set multiple keys to multiple values RedisConstants.VALUES /"CamelRedis.Values" (Map<String, Object>) Void MSETNX Set multiple keys to multiple values, only if none of the keys exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Void GETSET Set the string value of a key and return its old value RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) Object Key Commands Description Parameters Result EXISTS Determine if a key exists RedisConstants.KEY /"CamelRedis.Key" (String) Boolean DEL Delete a key RedisConstants.KEYS /"CamelRedis.Keys" (String) Void TYPE Determine the type stored at key RedisConstants.KEY /"CamelRedis.Key" (String) DataType KEYS Find all keys matching the given pattern RedisConstants.PATERN /"CamelRedis.Pattern" (String) Collection<String> RANDOMKEY Return a random key from the keyspace RedisConstants.PATERN 
/"CamelRedis.Pattern" (String), RedisConstants.VALUE /"CamelRedis.Value" (String) String RENAME Rename a key RedisConstants.KEY /"CamelRedis.Key" (String) Void RENAMENX Rename a key, only if the new key does not exist RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (String) Boolean EXPIRE Set a key's time to live in seconds RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Boolean SORT Sort the elements in a list, set or sorted set RedisConstants.KEY /"CamelRedis.Key" (String) List<Object> PERSIST Remove the expiration from a key RedisConstants.KEY /"CamelRedis.Key" (String) Boolean EXPIREAT Set the expiration for a key as a UNIX timestamp RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMESTAMP /"CamelRedis.Timestamp" (Long) Boolean PEXPIRE Set a key's time to live in milliseconds RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMEOUT /"CamelRedis.Timeout" (Long) Boolean PEXPIREAT Set the expiration for a key as a UNIX timestamp specified in milliseconds RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.TIMESTAMP /"CamelRedis.Timestamp" (Long) Boolean TTL Get the time to live for a key RedisConstants.KEY /"CamelRedis.Key" (String) Long MOVE Move a key to another database RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.DB /"CamelRedis.Db" (Integer) Boolean Geo Commands Description Parameters Result GEOADD Adds the specified geospatial items (latitude, longitude, name) to the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.LATITUDE /"CamelRedis.Latitude" (Double), RedisConstants.LONGITUDE /"CamelRedis.Longitude" (Double), RedisConstants.VALUE /"CamelRedis.Value" (Object) Long GEODIST Return the distance between two members in the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUES /"CamelRedis.Values" (Object[]) Distance GEOHASH Return valid Geohash strings representing the position of an element in the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) List<String> GEOPOS Return the positions (longitude, latitude) of an element in the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object) List<Point> GEORADIUS Return the element in the geospatial index for the specified key, which is within the borders of the area specified with the central location and the maximum distance from the center (the radius) RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.LATITUDE /"CamelRedis.Latitude" (Double), RedisConstants.LONGITUDE /"CamelRedis.Longitude" (Double), RedisConstants.RADIUS /"CamelRedis.Radius" (Double), RedisConstants.COUNT /"CamelRedis.Count" (Integer) GeoResults GEORADIUSBYMEMBER This command is exactly like GEORADIUS with the sole difference that instead of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index for the specified key RedisConstants.KEY /"CamelRedis.Key" (String), RedisConstants.VALUE /"CamelRedis.Value" (Object), RedisConstants.RADIUS /"CamelRedis.Radius" (Double), RedisConstants.COUNT /"CamelRedis.Count" (Integer) GeoResults Other Commands Description Parameters Result MULTI Mark the start of a transaction block none Void DISCARD Discard all commands issued after MULTI none 
Void EXEC Execute all commands issued after MULTI none Void WATCH Watch the given keys to determine the execution of the MULTI/EXEC block RedisConstants.KEYS /"CamelRedis.Keys" (String) Void UNWATCH Forget about all watched keys none Void ECHO Echo the given string RedisConstants.VALUE /"CamelRedis.Value" (String) String PING Ping the server none String QUIT Close the connection none Void PUBLISH Post a message to a channel RedisConstants.CHANNEL /"CamelRedis.Channel" (String), RedisConstants.MESSAGE /"CamelRedis.Message" (Object) Void 127.8. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.spring-redis.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configure JDBC data sources, JMS connection factories, AWS Clients, etc. True Boolean camel.component.spring-redis.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. False Boolean camel.component.spring-redis.enabled Whether to enable auto configuration of the spring-redis component. This is enabled by default. Boolean camel.component.spring-redis.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy, you can allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail start up. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. False Boolean camel.component.spring-redis.redis-template Reference to a pre-configured RedisTemplate instance to use. The option is an org.springframework.data.redis.core.RedisTemplate type. RedisTemplate
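As a brief illustration of the auto-configuration table above, the same options could be set in a Spring Boot application.yaml roughly as follows. This is a sketch only: the values are examples, and the redis-template entry assumes that a RedisTemplate bean named redisTemplate is registered elsewhere in the application and can be referenced with the # bean-reference syntax:
camel:
  component:
    spring-redis:
      enabled: true
      lazy-start-producer: false
      bridge-error-handler: false
      autowired-enabled: true
      # Reference to a pre-configured RedisTemplate bean (assumed bean name)
      redis-template: "#redisTemplate"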
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-redis-starter</artifactId> </dependency>", "<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>USD{camel-spring-boot-version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "spring-redis://host:port[?options]", "spring-redis:host:port", "from(\"direct:start\") .setHeader(\"CamelRedis.Key\", constant(key)) .setHeader(\"CamelRedis.Value\", constant(value)) .to(\"spring-redis://host:port?command=SET&redisTemplate=#redisTemplate\");", "from(\"spring-redis://host:port?command=SUBSCRIBE&channels=myChannel\") .log(\"Received message: USD{body}\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-redis-component-starter
Chapter 3. Networking Operators overview
Chapter 3. Networking Operators overview OpenShift Container Platform supports multiple types of networking Operators. You can manage the cluster networking using these networking Operators. 3.1. Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) default network provider plug-in selected for the cluster during installation. For more information, see Cluster Network Operator in OpenShift Container Platform . 3.2. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. For more information, see DNS Operator in OpenShift Container Platform . 3.3. Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to external clients. The Ingress Operator implements the Ingress Controller API and is responsible for enabling external access to OpenShift Container Platform cluster services. For more information, see Ingress Operator in OpenShift Container Platform .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/networking-operators-overview
Chapter 11. Troubleshooting
Chapter 11. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 11.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.12 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 11.2. MTC custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 11.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 11.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 11.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 11.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 11.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 11.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 11.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 11.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a MigMigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster and to return the names of pods that are not in a Running state. 11.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration. 
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 11.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 11.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 11.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu next to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 11.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 11.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 11.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 11.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 11.3.4.1.1. 
cam_app_workload_migrations This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry. Table 11.1. cam_app_workload_migrations metric Queryable label name Sample label values Label description status running , idle , failed , completed Status of the MigMigration CR type stage, final Type of the MigMigration CR 11.3.4.1.2. mtc_client_request_count This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 11.2. mtc_client_request_count metric Queryable label name Sample label values Label description cluster https://migcluster-url:443 Cluster that the request was issued against component MigPlan , MigCluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes kind the request was issued for 11.3.4.1.3. mtc_client_request_elapsed This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 11.3. mtc_client_request_elapsed metric Queryable label name Sample label values Label description cluster https://cluster-url.com:443 Cluster that the request was issued against component migplan , migcluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes resource that the request was issued for 11.3.4.1.4. Useful queries The table lists some helpful queries that can be used for monitoring performance. Table 11.4. Useful queries Query Description mtc_client_request_count Number of API requests issued, sorted by request type sum(mtc_client_request_count) Total number of API requests issued mtc_client_request_elapsed API request latency, sorted by request type sum(mtc_client_request_elapsed) Total latency of API requests sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) Average latency of API requests mtc_client_request_elapsed / mtc_client_request_count Average latency of API requests, sorted by request type cam_app_workload_migrations{status="running"} * 100 Count of running migrations, multiplied by 100 for easier viewing alongside request counts 11.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 This command saves the data as the must-gather/must-gather.tar.gz file. You can upload this file to a support case on the Red Hat Customer Portal . 
To collect data for the past 24 hours, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump This operation can take a long time. This command saves the data as the must-gather/metrics/prom_data.tar.gz file. 11.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 11.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. 
For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 11.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. 
MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes 
- events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 11.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 11.4.1. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 11.4.2. Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 11.4.2.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 11.4.2.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 11.4.2.3. 
Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 11.4.2.4. Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 11.4.2.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 11.4.2.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. 
Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 11.4.2.7. Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 11.4.3. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 11.4.3.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 11.4.3.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false . 11.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 11.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. 
Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 11.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 11.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console
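Tip The premigration replica counts recorded by MTC can also be listed for an entire namespace in a single step before you unquiesce the application. The following command is a minimal sketch rather than part of the documented rollback procedure; it assumes that the deployments MTC quiesced carry the migration.openshift.io/preQuiesceReplicas annotation and that <namespace> is replaced with a namespace from the migration plan. It prints each deployment name next to its recorded premigration replica count: USD oc get deployment -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}{"\n"}{end}' Deployments that were never quiesced print an empty value and do not need to be rescaled.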
[ "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. 
exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore 
openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" 
error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/troubleshooting-mtc
Chapter 2. Understanding Operators
Chapter 2. Understanding Operators 2.1. What are Operators? Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers. Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor's engineering team, monitoring a Kubernetes environment (such as OpenShift Container Platform) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time. More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes. 2.1.1. Why use Operators? Operators provide: Repeatability of installation and upgrade. Constant health checks of every system component. Over-the-air (OTA) updates for OpenShift components and ISV content. A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two. Why deploy on Kubernetes? Kubernetes (and by extension, OpenShift Container Platform) contains all of the primitives needed to build complex distributed systems - secret handling, load balancing, service discovery, autoscaling - that work across on-premises and cloud providers. Why manage your app with Kubernetes APIs and kubectl tooling? These APIs are feature rich, have clients for all platforms and plug into the cluster's access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB , looks and acts just like the built-in, native Kubernetes objects. How do Operators compare with service brokers? A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well. 2.1.2. Operator Framework The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems: Operator SDK The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities. Operator Lifecycle Manager Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in OpenShift Container Platform 4.13. 
Operator Registry The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM. OperatorHub OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform. These tools are designed to be composable, so you can use any that are useful to you. 2.1.3. Operator maturity model The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator. One can however generalize the scale of the maturity of the encapsulated operations of an Operator for certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator: Figure 2.1. Operator maturity model The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK. 2.2. Operator Framework packaging format This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.2.1. Bundle format The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata. An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image , which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay. Operator metadata can include: Information that identifies the Operator, for example its name and version. Additional information that drives the UI, for example its icon and some example custom resources (CRs). Required and provided APIs. Related images. When loading manifests into the Operator Registry database, the following requirements are validated: The bundle must have at least one channel defined in the annotations. Every bundle has exactly one cluster service version (CSV). If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle. 2.2.1.1. Manifests Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator. A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory. 
Example bundle format layout etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml Additionally supported objects The following object types can also be optionally included in the /manifests directory of a bundle: Supported optional object types ClusterRole ClusterRoleBinding ConfigMap ConsoleCLIDownload ConsoleLink ConsoleQuickStart ConsoleYamlSample PodDisruptionBudget PriorityClass PrometheusRule Role RoleBinding Secret Service ServiceAccount ServiceMonitor VerticalPodAutoscaler When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV: Lifecycle for optional objects When the CSV is deleted, OLM deletes the optional object. When the CSV is upgraded: If the name of the optional object is the same, OLM updates it in place. If the name of the optional object has changed between versions, OLM deletes and recreates it. 2.2.1.2. Annotations A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles: Example annotations.yaml annotations: operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1 operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2 operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3 operators.operatorframework.io.bundle.package.v1: "test-operator" 4 operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5 operators.operatorframework.io.bundle.channel.default.v1: "stable" 6 1 The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects. 2 The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/ . The value manifests.v1 implies that the bundle contains Operator manifests. 3 The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/ . The value metadata.v1 implies that this bundle has Operator metadata. 4 The package name of the bundle. 5 The list of channels the bundle is subscribing to when added into an Operator Registry. 6 The default channel an Operator should be subscribed to when installed from a registry. Note In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file. 2.2.1.3. Dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . 
olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 Additional resources Operator Lifecycle Manager dependency resolution 2.2.1.4. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. See CLI tools for steps on installing the opm CLI. 2.2.2. File-based catalogs File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade edges Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. 
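As an illustration of the composability and validation described above, the following commands sketch how the catalogA and catalogB directories mentioned earlier could be combined into a new catalogC catalog and then checked for problems such as duplicate packages; the directory names are placeholders only:
Example commands for composing and validating a catalog
mkdir catalogC
cp -r catalogA catalogB catalogC/
opm validate catalogC
If opm validate finds a problem, such as a duplicate package, it reports an error and exits with a non-zero status; no output and a zero exit status indicate that the combined catalog is valid.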
Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.2.2.1. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog could also be included in a parent catalog by copying it into the parent catalog's root directory. 2.2.2.2. 
Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 2.2.2.2.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon. Example 2.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 2.2.2.2.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles. A bundle can be included as an entry in multiple olm.channel blobs, but it can have only one entry per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 2.2. olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of the bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are therefore no longer installable by users with the spec.startingCSV property of Subscription objects. If you want to have direct (one version increment) updates to an Operator version from multiple previous versions, and also keep those previous versions available to users for installation, always use the skipRange field along with the replaces field.
Ensure that the replaces field points to the immediate version of the Operator version in question. 2.2.2.2.3. olm.bundle schema Example 2.3. olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 2.2.2.3. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. OLM defines a handful of property types, again using the reserved olm.* prefix. 2.2.2.3.1. olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 2.4. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 2.2.2.3.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 2.5. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.3.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. Example 2.6. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 2.2.2.3.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 2.7. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.4. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. 
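To make these schemas and properties concrete, the following sketch shows the kind of blobs a single rendered package might contribute to a file-based catalog; the package name, versions, GVK, and image reference are hypothetical:
Example file-based catalog blobs for a hypothetical package
---
schema: olm.package
name: example-operator
defaultChannel: stable
---
schema: olm.channel
package: example-operator
name: stable
entries:
- name: example-operator.v1.0.0
- name: example-operator.v1.1.0
  replaces: example-operator.v1.0.0
  skipRange: '>=1.0.0 <1.1.0'
---
schema: olm.bundle
package: example-operator
name: example-operator.v1.1.0
image: quay.io/example-org/example-operator-bundle@sha256:<digest>
properties:
- type: olm.package
  value:
    packageName: example-operator
    version: 1.1.0
- type: olm.gvk
  value:
    group: example.com
    version: v1alpha1
    kind: ExampleApp
Note how the olm.package property repeats the package name and version of the bundle, and how the channel entry combines replaces and skipRange to express the upgrade edges described in the olm.channel schema section.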
There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm alpha generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 2.2.2.5. Guidelines Consider the following guidelines when maintaining file-based catalogs. 2.2.2.5.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 2.2.2.5.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 2.2.2.6. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 2.2.2.7. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. 
Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 2.2.3. RukPak (Technology Preview) Important RukPak is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.12 introduces the platform Operator type as a Technology Preview feature. The platform Operator mechanism relies on the RukPak component, also introduced in OpenShift Container Platform 4.12, and its resources to manage content. RukPak consists of a series of controllers, known as provisioners , that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment . These components work together to bring content onto the cluster and install it, generating resources within the cluster. A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources. Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles. Additional resources Managing platform Operators Technology Preview restrictions for platform Operators 2.2.3.1. Bundle A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content. Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type. Example Bundle object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain Note Bundles are considered immutable after they are created. 2.2.3.1.1. Bundle immutability After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. 
This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object. Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status , are updated during the bundle's lifecycle; it is only the spec field that is considered immutable. Applying a Bundle object and then attempting to update its spec should fail. For example, the following example creates a bundle: USD oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF Example output bundle.core.rukpak.io/combo-tag-ref created Then, patching the bundle to point to a newer tag returns an error: USD oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}' Example output Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place. Further immutability considerations While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario: A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod. If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content. This is similar to pod behavior, where one of the pod's container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it. To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle. 2.2.3.1.2. Plain bundle spec A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory. The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0 , combines the type of bundle ( plain ) with the current schema version ( v0 ). 
Note The plain+v0 bundle format is at schema version v0 , which means it is an experimental format that is subject to change. For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application. Example plain+v0 bundle file tree manifests ├── namespace.yaml ├── cluster_role.yaml ├── role.yaml ├── serviceaccount.yaml ├── cluster_role_binding.yaml ├── role_binding.yaml └── deployment.yaml The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories. Important Do not include any content in the manifests/ directory of a plain bundle that is not a static manifest. Otherwise, a failure will occur when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command will result in an error. Multi-object YAML or JSON files are valid, as well. 2.2.3.1.3. Registry bundle spec A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format. Additional resources Legacy OLM bundle format 2.2.3.2. BundleDeployment Warning A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions. The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle. Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept. The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle are defined by the provisioner that is configured to watch that bundle deployment. Example BundleDeployment object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain 2.2.3.3. Provisioner A RukPak provisioner is a controller that understands the BundleDeployment and Bundle APIs and can take action. Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster. 2.3. Operator Framework glossary of common terms This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK. 2.3.1. Common Operator Framework terms 2.3.1.1. Bundle In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata.
Together, they form a unique version of an Operator that can be installed onto the cluster. 2.3.1.2. Bundle image In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub. 2.3.1.3. Catalog source A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies. 2.3.1.4. Channel A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest. An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel. 2.3.1.5. Channel head A channel head refers to the latest known update in a particular channel. 2.3.1.6. Cluster service version A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on. 2.3.1.7. Dependency An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer. OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles. 2.3.1.8. Index image In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool. 2.3.1.9. Install plan An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV. 2.3.1.10. Multitenancy A tenant in OpenShift Container Platform is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams. When a cluster is shared by multiple users or groups, it is considered a multitenant cluster. 2.3.1.11. Operator group An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide. 2.3.1.12. Package In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs. 2.3.1.13. Registry A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels. 2.3.1.14. 
Subscription A subscription keeps CSVs up to date by tracking a channel in a package. 2.3.1.15. Update graph An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added. 2.4. Operator Lifecycle Manager (OLM) 2.4.1. Operator Lifecycle Manager concepts and resources This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.1.1. What is Operator Lifecycle Manager? Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 2.2. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.13, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. 2.4.1.2. OLM resources The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM): Table 2.1. CRDs managed by OLM and Catalog Operators Resource Short name Description ClusterServiceVersion (CSV) csv Application metadata. For example: name, version, icon, required resources. CatalogSource catsrc A repository of CSVs, CRDs, and packages that define an application. Subscription sub Keeps CSVs up to date by tracking a channel in a package. InstallPlan ip Calculated list of resources to be created to automatically install or upgrade a CSV. OperatorGroup og Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. OperatorConditions - Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM. 2.4.1.2.1. Cluster service version A cluster service version (CSV) represents a specific version of a running Operator on an OpenShift Container Platform cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster. OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm , deb , or apk bundle. A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo. 
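For illustration only, the metadata portion of a CSV might look like the following abbreviated and hypothetical example, with most required fields omitted:
Example abbreviated CSV metadata (hypothetical)
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.1
spec:
  displayName: Example Operator
  version: 1.0.1
  description: Manages ExampleApp workloads.
  links:
  - name: Repository
    url: https://github.com/example-org/example-operator
  icon:
  - base64data: <base64_encoded_image>
    mediatype: image/png
  ...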
A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment. 2.4.1.2.2. Catalog source A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OpenShift Container Platform web console also displays the Operators provided by catalog sources. Tip Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration Cluster Settings Configuration OperatorHub page in the web console. The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API. Example 2.8. Example CatalogSource object \ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace 1 Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace. 2 Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace . The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace. 3 Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog's index image version as part of cluster upgrades. Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details. 4 Display name for the catalog in the web console and CLI. 5 Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time. 6 Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. 
A higher weight indicates the catalog is preferred over lower-weighted catalogs. 7 Source types include the following: grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API. grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases. configmap : OLM parses config map data and runs a pod that can serve the gRPC API over it. 8 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 9 Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image , if defined. 10 Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image , if defined. Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ( "" ) assigns the pod the default priority. Other priority classes can be defined manually. 11 Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image , if defined. 12 Automatically check for new versions at a given interval to stay up-to-date. 13 Last observed state of the catalog connection. For example: READY : A connection is successfully established. CONNECTING : A connection is attempting to establish. TRANSIENT_FAILURE : A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again. See States of Connectivity in the gRPC documentation for more details. 14 Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date. 15 Status information for the catalog's Operator Registry service. Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator: Example 2.9. Example Subscription object referencing a catalog source apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace Additional resources Understanding OperatorHub Red Hat-provided Operator catalogs Adding a catalog source to a cluster Catalog priority Viewing Operator catalog source status by using the CLI Understanding and managing pod security admission Catalog source pod scheduling 2.4.1.2.2.1. Image template for custom catalog sources Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OpenShift Container Platform 4.13. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. 
For example during an upgrade from OpenShift Container Platform 4.12 to 4.13, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.12 to: registry.redhat.io/redhat/redhat-operator-index:v4.13 However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image. Starting in OpenShift Container Platform 4.9, cluster administrators can add the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template: kube_major_version kube_minor_version kube_patch_version Note You must specify the Kubernetes cluster version and not an OpenShift Container Platform cluster version, as the latter is not currently available for templating. Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path. Important You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade. Example 2.10. Example catalog source with an image template apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.26 priority: -400 publisher: Example Org Note If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value. If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition. For an OpenShift Container Platform 4.13 cluster, which uses Kubernetes 1.26, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference: quay.io/example-org/example-catalog:v1.26 For future releases of OpenShift Container Platform, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OpenShift Container Platform version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OpenShift Container Platform version would then automatically update the catalog's index image as well. 2.4.1.2.2.2. 
Catalog health requirements Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster. For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A. As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator. As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog. Additional resources Removing custom catalogs Disabling the default OperatorHub catalog sources 2.4.1.2.3. Subscription A subscription , defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha , beta , or stable , helps determine which Operator stream should be installed from the catalog source. The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). In addition to being easily visible from the OpenShift Container Platform web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster. Additional resources Multitenancy and Operator colocation Viewing Operator subscription status by using the CLI 2.4.1.2.4. Install plan An install plan , defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV). 
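For example, you can list the install plans that OLM has generated in a namespace and then inspect an individual plan to review the resources it creates; the namespace and install plan name in the following commands are placeholders:

$ oc get installplans -n <namespace>

$ oc describe installplan <install_plan_name> -n <namespace>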
To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator. The install plan must then be approved according to one of the following approval strategies: If the subscription's spec.installPlanApproval field is set to Automatic , the install plan is approved automatically. If the subscription's spec.installPlanApproval field is set to Manual , the install plan must be manually approved by a cluster administrator or user with proper permissions. After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription. Example 2.11. Example InstallPlan object apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: ... catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- ... name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- ... name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- ... name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created ... Additional resources Multitenancy and Operator colocation Allowing non-cluster administrators to install Operators 2.4.1.2.5. Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. Additional resources Operator groups 2.4.1.2.6. Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. 
While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. Additional resources Operator conditions 2.4.2. Operator Lifecycle Manager architecture This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.2.1. Component responsibilities Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 2.2. CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 2.3. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions 2.4.2.2. OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. 2.4.2.3. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. 
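For example, a subscription similar to the following sketch tracks the stable channel of a package and opts in to automatic updates through the installPlanApproval field; the names shown are illustrative placeholders rather than values from an actual catalog:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  installPlanApproval: Automatic
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace

Setting installPlanApproval to Manual instead causes OLM to wait for an administrator to approve each generated install plan before any resources are created.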
When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. 2.4.2.4. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. 2.4.3. Operator Lifecycle Manager workflow This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.3.1. Operator installation and upgrade workflow in OLM In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades: ClusterServiceVersion (CSV) CatalogSource Subscription Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API , to query for available Operators as well as upgrades for installed Operators. Figure 2.3. Catalog source overview Within a catalog source, Operators are organized into packages and streams of updates called channels , which should be a familiar update pattern from OpenShift Container Platform or other software on a continuous release cycle like web browsers. Figure 2.4. Packages and channels in a Catalog source A user indicates a particular package and channel in a particular catalog source in a subscription , for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed. Note OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog channel package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository. Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates: Figure 2.5. OLM graph of available channel updates Example channels in a package packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV. 2.4.3.1.1. 
Example upgrade path For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1 . OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2 , which in turn replaces the older and installed CSV version 0.1.1 . OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1 ; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head. For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1 . Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2 . At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed. 2.4.3.1.2. Skipping upgrades The basic path for upgrades in OLM is: A catalog source is updated with one or more updates to an Operator. OLM traverses every version of the Operator until reaching the latest version the catalog source contains. However, sometimes this is not a safe operation to perform. There will be cases where a published version of an Operator should never be installed on a cluster if it has not already, for example because a version introduces a serious vulnerability. In those cases, OLM must consider two cluster states and provide an update graph that supports both: The "bad" intermediate Operator has been seen by the cluster and installed. The "bad" intermediate Operator has not yet been installed onto the cluster. By shipping a new catalog and adding a skipped release, OLM is ensured that it can always get a single unique update regardless of the cluster state and whether it has seen the bad update yet. Example CSV with skipped release apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1 Consider the following example of Old CatalogSource and New CatalogSource . Figure 2.6. Skipping updates This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . If the bad update has not yet been installed, it will never be. 2.4.3.1.3. Replacing multiple Operators Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation: olm.skipRange: <semver_range> where <semver_range> has the version range format supported by the semver library . When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel. The order of precedence is: Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met. The Operator that replaces the current one, in the source specified by sourceName . Channel head in another source that is visible to the subscription, if the other criteria for skipping are met. The Operator that replaces the current one in any source visible to the subscription.
Example CSV with skipRange apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2' 2.4.3.1.4. Z-stream support A z-stream , or patch release, must replace all z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions, it just needs to build the correct graph in a catalog. In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource : Figure 2.7. Replacing several Operators This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource . Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this. 2.4.4. Operator Lifecycle Manager dependency resolution This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.4.1. About dependency resolution Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm . However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other. As a result, OLM must never create the following scenarios: Install a set of Operators that require APIs that cannot be provided Update an Operator in a way that breaks another that depends upon it This is made possible with two types of data: Properties Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator. Constraints or dependencies An Operator's requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed. OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed. 2.4.4.2. Operator properties All Operators in a catalog have the following properties: olm.package Includes the name of the package and the version of the Operator olm.gvk A single property for each provided API from the cluster service version (CSV) Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle. Example arbitrary property properties: - type: olm.kubeversion value: version: "1.16.0" 2.4.4.2.1. 
Arbitrary properties Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime. These properties are opaque to the resolver as it does not understand the properties, but it can evaluate the generic constraints against those properties to determine if the constraints can be satisfied given the properties list. Example arbitrary properties properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints. Additional resources Common Expression Language (CEL) constraints 2.4.4.3. Operator dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 2.4.4.4. Generic constraints An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field holding a string-representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime. The following keys denote the available constraint types: gvk Type whose value and interpretation is identical to the olm.gvk type package Type whose value and interpretation is identical to the olm.package type cel A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information all , any , not Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk or a nested compound constraint 2.4.4.4.1. Common Expression Language (CEL) constraints The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint. 
Example cel constraint type: olm.constraint value: failureMessage: 'require to have "certified"' cel: rule: 'properties.exists(p, p.type == "certified")' The CEL syntax supports a wide range of logical operators, such as AND and OR . As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is solved into a single bundle or Operator that satisfies all of those rules within a single constraint. Example cel constraint with multiple rules type: olm.constraint value: failureMessage: 'require to have "certified" and "stable" properties' cel: rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")' 2.4.4.4.2. Compound constraints (all, any, not) Compound constraint types are evaluated following their logical definitions. The following is an example of a conjunctive constraint ( all ) of two packages and one GVK. That is, they must all be satisfied by installed bundles: Example all constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because... all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for... gvk: group: greens.example.com version: v1 kind: Green The following is an example of a disjunctive constraint ( any ) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles: Example any constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because... any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue The following is an example of a negation constraint ( not ) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set: Example not constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because... not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens The negation semantics might appear unclear in the not constraint context. To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set. As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense. 2.4.4.4.3. Nested compound constraints A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type. The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint: Example nested compound constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because... 
any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue Note The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks. 2.4.4.5. Dependency preferences There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear. 2.4.4.5.1. Catalog priority On OpenShift Container Platform cluster, OLM reads catalog sources to know which Operators are available for installation. Example CatalogSource object apiVersion: "operators.coreos.com/v1alpha1" kind: "CatalogSource" metadata: name: "my-operators" namespace: "operators" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: "My Operators" priority: 100 1 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency. There are two rules that govern catalog preference: Options in higher-priority catalogs are preferred to options in lower-priority catalogs. Options in the same catalog as the dependent are preferred to any other catalogs. 2.4.4.5.2. Channel ordering An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Container Platform cluster. Channels can be used to provide a particular stream of updates for a minor release ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels. Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name. 2.4.4.5.3. Order within a channel There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs. When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency. Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first. 2.4.4.5.4. Other constraints In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants. 2.4.4.5.4.1. Subscription constraint A subscription constraint filters the set of Operators that can satisfy a subscription. 
Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated. 2.4.4.5.4.2. Package constraint Within a namespace, no two Operators may come from the same package. 2.4.4.5.5. Additional resources Catalog health requirements 2.4.4.6. CRD upgrades OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions: All existing serving versions in the current CRD are present in the new CRD. All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD. Additional resources Adding a new CRD version Deprecating or removing a CRD version 2.4.4.7. Dependency best practices When specifying dependencies, there are best practices you should consider. Depend on APIs or a specific version range of Operators Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead. Set a minimum version The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible. For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended. For example: TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource. TestOperator v1.0.1 adds a new field spec.newfield to MyObject , but still at v1alpha1. Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0. Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum. Omit a maximum version or allow a very wide range Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency. Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0 . Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound. Note Cluster administrators cannot override dependencies set by an Operator author. However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1 . Additional resources Kubernetes documentation: Changing the API 2.4.4.8.
Dependency caveats When specifying dependencies, there are caveats you should consider. No compound constraints (AND) There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0 . This means that when specifying a dependency such as: dependencies: - type: olm.package value: packageName: etcd version: ">3.1.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0 . Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the ordering that potential options are visited. Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other. Cross-namespace compatibility OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa. 2.4.4.9. Example dependency resolution scenarios In the following examples, a provider is an Operator which "owns" a CRD or API service. Example: Deprecating dependent APIs A and B are APIs (CRDs): The provider of A depends on B. The provider of B has a subscription. The provider of B updates to provide C but deprecates B. This results in: B no longer has a provider. A no longer works. This is a case OLM prevents with its upgrade strategy. Example: Version deadlock A and B are APIs: The provider of A requires B. The provider of B requires A. The provider of A updates to (provide A2, require B2) and deprecate A. The provider of B updates to (provide B2, require A2) and deprecate B. If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found. This is another case OLM prevents with its upgrade strategy. 2.4.5. Operator groups This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.5.1. About Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. 2.4.5.2. Operator group membership An Operator is considered a member of an Operator group if the following conditions are true: The CSV of the Operator exists in the same namespace as the Operator group. The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group. An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes : Table 2.4. Install modes and supported Operator groups InstallModeType Description OwnNamespace The Operator can be a member of an Operator group that selects its own namespace. 
SingleNamespace The Operator can be a member of an Operator group that selects one namespace. MultiNamespace The Operator can be a member of an Operator group that selects more than one namespace. AllNamespaces The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string "" ). Note If the spec of a CSV omits an entry of InstallModeType , then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it. 2.4.5.3. Target namespace selection You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. You can alternatively specify a namespace using a label selector with the spec.selector parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: "true" Important Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an Operator group will likely be removed in a future release. If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespace of a global Operator group contains the empty string ( "" ), which signals to a consuming Operator that it should watch all namespaces. 2.4.5.4. Operator group CSV annotations Member CSVs of an Operator group have the following annotations: Annotation Description olm.operatorGroup=<group_name> Contains the name of the Operator group. olm.operatorNamespace=<group_namespace> Contains the namespace of the Operator group. olm.targetNamespaces=<target_namespaces> Contains a comma-delimited string that lists the target namespace selection of the Operator group. Note All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants. 2.4.5.5. Provided APIs annotation A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about what GVKs are provided by an Operator group is shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included.
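You can inspect this annotation on an existing Operator group; in the following command the Operator group name and namespace are placeholders, and the olm.providedAPIs value appears under metadata.annotations in the output:

$ oc get operatorgroup <operatorgroup_name> -n <namespace> -o yaml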
Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local ... spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local 2.4.5.6. Role-based access control When an Operator group is created, three cluster roles are generated. Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below: Cluster role Label to match <operatorgroup_name>-admin olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <operatorgroup_name>-edit olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <operatorgroup_name>-view olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict : Cluster roles for each API resource from a CRD Cluster roles for each API resource from an API service Additional roles and role bindings Table 2.5. Cluster roles generated for each API resource from a CRD Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> <kind>.<group>-<version>-view-crdview Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name> : get Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Table 2.6. 
Cluster roles generated for each API resource from an API service Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Additional roles and role bindings If the CSV defines exactly one target namespace that contains * , then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels. If the CSV does not define exactly one target namespace that contains * , then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace. 2.4.5.7. Copied CSVs OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there. Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants. Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV. Note By default, the disableCopiedCSVs field is disabled. After enabling a disableCopiedCSVs field, the OLM deletes existing copied CSVs on a cluster. When a disableCopiedCSVs field is disabled, the OLM adds copied CSVs again. Disable the disableCopiedCSVs field: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF Enable the disableCopiedCSVs field: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF 2.4.5.8. Static Operator groups An Operator group is static if its spec.staticProvidedAPIs field is set to true . As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources. 
Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" annotation: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: "true" Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. 2.4.5.9. Operator group intersection Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set. A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces. Note When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces. Rules for intersection Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set: If true and the CSV's provided APIs are a subset of the Operator group's: Continue transitioning. If true and the CSV's provided APIs are not a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the union of itself and the CSV's provided APIs. If false and the CSV's provided APIs are not a subset of the Operator group's: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict . If false and the CSV's provided APIs are a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the difference between itself and the CSV's provided APIs. Note Failure states caused by Operator groups are non-terminal. The following actions are performed each time an Operator group synchronizes: The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored. The cluster set is compared to olm.providedAPIs , and if olm.providedAPIs contains any extra APIs, then those APIs are pruned. All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV. 2.4.5.10. 
Limitations for multitenant Operator management OpenShift Container Platform provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator's API versions must be the same. Operators are control plane extensions due to their usage of CustomResourceDefinition objects (CRDs), which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs. This makes them incompatible to install simultaneously in different namespaces on a cluster. All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster. The supported scenarios include the following: Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions) Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster. Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation Operators in multitenant clusters Allowing non-cluster administrators to install Operators 2.4.5.11. Troubleshooting Operator groups Membership An install plan's namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios: No Operator groups exist in the install plan's namespace. Multiple Operator groups exist in the install plan's namespace. An incorrect or non-existent service account name is specified in the Operator group. If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace: attenuated service account query failed - more than one operator group(s) are managing this namespace count=2 where count= specifies the number of Operator groups in the namespace. If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup . CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection. 2.4.6. Multitenancy and Operator colocation This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM). 2.4.6.1. Colocation of Operators in a namespace Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. 
This default behavior manifests in two ways: InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace. All Operators in the same namespace share the same update policy. For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual. These scenarios can lead to the following issues: It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator. It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators. These issues usually surface because, when installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. As a cluster administrator, you can bypass this default behavior manually by using the following workflow: Create a namespace for the installation of the Operator. Create a custom global Operator group , which is an Operator group that watches all namespaces. By associating this Operator group with the namespace you just created, it makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces. Install the desired Operator in the installation namespace. If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans. For a detailed procedure, see "Installing global Operators in custom namespaces". Additional resources Installing global Operators in custom namespaces Operators in multitenant clusters 2.4.7. Operator conditions This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions. 2.4.7.1. About Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. 2.4.7.2. Supported conditions Operator Lifecycle Manager (OLM) supports the following Operator conditions. 2.4.7.2.1. Upgradeable condition The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when: An Operator is about to start a critical process and should not be upgraded until the process is completed.
An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded. Important Setting the Upgradeable Operator condition to the False value does not avoid pod disruption. If you must ensure your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section. Example Upgradeable Operator condition apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: "False" 2 reason: "migration" message: "The Operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Name of the condition. 2 A False value indicates the Operator is not ready to be upgraded. OLM prevents a CSV that replaces the existing CSV of the Operator from leaving the Pending phase. A False value does not block cluster upgrades. 2.4.7.3. Additional resources Managing Operator conditions Enabling Operator conditions Using pod disruption budgets to specify the number of pods that must be up Graceful termination 2.4.8. Operator Lifecycle Manager metrics 2.4.8.1. Exposed metrics Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack. Table 2.7. Metrics exposed by OLM Name Description catalog_source_count Number of catalog sources. catalogsource_ready State of a catalog source. The value 1 indicates that the catalog source is in a READY state. The value of 0 indicates that the catalog source is not in a READY state. csv_abnormal When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded , for example when it is not installed. Includes the name , namespace , phase , reason , and version labels. A Prometheus alert is created when this metric is present. csv_count Number of CSVs successfully registered. csv_succeeded When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1 ) or not (value 0 ). Includes the name , namespace , and version labels. csv_upgrade_count Monotonic count of CSV upgrades. install_plan_count Number of install plans. installplan_warnings_total Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan. olm_resolution_duration_seconds The duration of a dependency resolution attempt. subscription_count Number of subscriptions. subscription_sync_total Monotonic count of subscription syncs. Includes the channel , installed CSV, and subscription name labels. 2.4.9. Webhook management in Operator Lifecycle Manager Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator. See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM. 2.4.9.1. Additional resources Types of webhook admission plugins Kubernetes documentation: Validating admission webhooks Mutating admission webhooks Conversion webhooks 2.5. Understanding OperatorHub 2.5.1. 
About OperatorHub OperatorHub is the web console interface in OpenShift Container Platform that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM). Cluster administrators can choose from catalogs grouped into the following categories: Category Description Red Hat Operators Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. Certified Operators Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. Red Hat Marketplace Certified software that can be purchased from Red Hat Marketplace . Community Operators Optionally-visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. Custom Operators Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub. Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions. The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com . 2.5.2. OperatorHub architecture The OperatorHub UI component is driven by the Marketplace Operator by default on OpenShift Container Platform in the openshift-marketplace namespace. 2.5.2.1. OperatorHub custom resource The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. You can modify this resource to enable or disable the default catalogs, which is useful when configuring OpenShift Container Platform in restricted network environments. Example OperatorHub custom resource apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: "community-operators", disabled: false } ] 1 disableAllDefaultSources is an override that controls availability of all default catalogs that are configured by default during an OpenShift Container Platform installation. 2 Disable default catalogs individually by changing the disabled parameter value per source. 2.5.3. Additional resources Catalog source About the Operator SDK Defining cluster service versions (CSVs) Operator installation and upgrade workflow in OLM Red Hat Partner Connect Red Hat Marketplace 2.6. Red Hat-provided Operator catalogs Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. 
The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs , Operator Framework packaging format , and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.6.1. About Operator catalogs An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster. As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content. As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Note Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in OpenShift Container Platform 4.8 and later. When creating custom catalog images, versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders must use the opm index command to manage index images. Additional resources Managing custom catalogs Packaging format Using Operator Lifecycle Manager on restricted networks 2.6.2. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. 
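As a quick illustration, you can list the catalog sources that are currently available cluster-wide by querying that namespace; a minimal sketch:

oc get catalogsources -n openshift-marketplace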
The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4.13 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. certified-operators registry.redhat.io/redhat/certified-operator-index:v4.13 Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4.13 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4.13 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources is updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 2.7. Operators in multitenant clusters The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters. In order for multiple tenants on an OpenShift Container Platform cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege. Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements. Additional resources Common terms: Multitenant Limitations for multitenant Operator management 2.7.1. Default Operator install modes and behavior When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator's capabilities: Single namespace Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace. All namespaces Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator's suggested namespace. This choice also means that users in the affected namespaces get access to the Operator's APIs, which can leverage the custom resources (CRs) they own, depending on their role in the namespace: The namespace-admin and namespace-edit roles can read/write to the Operator APIs, meaning they can use them. The namespace-view role can read CR objects of that Operator. For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator's privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces.
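To check which install modes a particular Operator supports before deciding between these options, you can inspect the installModes field of its cluster service version; the CSV name and namespace in this sketch are placeholders:

oc get csv <csv_name> -n <namespace> -o jsonpath='{.spec.installModes}'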
Additional resources Adding Operators to a cluster Install modes types Setting a suggested namespace 2.7.2. Recommended solution for multitenant clusters While a Multinamespace install mode does exist, it is supported by very few Operators. As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow: Create a namespace for the tenant Operator that is separate from the tenant's namespace. Create an Operator group for the tenant Operator scoped only to the tenant's namespace. Install the Operator in the tenant Operator namespace. As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account is visible or usable by the tenant. This solution provides better tenant separation and follows the principle of least privilege, at the cost of higher resource usage and the additional orchestration needed to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters". Limitations and considerations This solution only works when the following constraints are met: All instances of the same Operator must be the same version. The Operator cannot have dependencies on other Operators. The Operator cannot ship a CRD conversion webhook. Important You cannot use different versions of the same Operator on the same cluster. Eventually, the installation of another instance of the Operator would be blocked when it meets the following conditions: The instance is not the newest version of the Operator. The instance ships an older revision of the CRDs that lack information or versions that newer revisions have that are already in use on the cluster. Warning As an administrator, use caution when allowing non-cluster administrators to install Operators self-sufficiently, as explained in "Allowing non-cluster administrators to install Operators". These tenants should only have access to a curated catalog of Operators that are known to not have dependencies. These tenants must also be forced to use the same version line of an Operator, to ensure the CRDs do not change. This requires the use of namespace-scoped catalogs and likely disabling the global default catalogs. Additional resources Preparing for multiple instances of an Operator for multitenant clusters Allowing non-cluster administrators to install Operators Disabling the default OperatorHub catalog sources 2.7.3. Operator colocation and Operator groups Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation . 2.8. CRDs 2.8.1. Extending the Kubernetes API with custom resource definitions Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so that custom objects managed by the Operator look and act just like the built-in, native Kubernetes objects. This guide describes how cluster administrators can extend their OpenShift Container Platform cluster by creating and managing CRDs. 2.8.1.1.
Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. When a cluster administrator adds a new CRD to the cluster, the Kubernetes API server reacts by creating a new RESTful resource path that can be accessed by the entire cluster or a single project (namespace) and begins serving the specified CR. Cluster administrators that want to grant access to the CRD to other users can use cluster role aggregation to grant access to users with the admin , edit , or view default cluster roles. Cluster role aggregation allows the insertion of custom policy rules into these cluster roles. This behavior integrates the new resource into the RBAC policy of the cluster as if it was a built-in resource. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.1.2. Creating a custom resource definition To create custom resource (CR) objects, cluster administrators must first create a custom resource definition (CRD). Prerequisites Access to an OpenShift Container Platform cluster with cluster-admin user privileges. Procedure To create a CRD: Create a YAML file that contains the following field types: Example YAML file for a CRD apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9 1 Use the apiextensions.k8s.io/v1 API. 2 Specify a name for the definition. This must be in the <plural-name>.<group> format using the values from the group and plural fields. 3 Specify a group name for the API. An API group is a collection of objects that are logically related. For example, all batch objects like Job or ScheduledJob could be in the batch API group (such as batch.api.example.com ). A good practice is to use a fully-qualified-domain name (FQDN) of your organization. 4 Specify a version name to be used in the URL. Each API group can exist in multiple versions, for example v1alpha , v1beta , v1 . 5 Specify whether the custom objects are available to a project ( Namespaced ) or all projects in the cluster ( Cluster ). 6 Specify the plural name to use in the URL. The plural field is the same as a resource in an API URL. 7 Specify a singular name to use as an alias on the CLI and for display. 8 Specify the kind of objects that can be created. The type can be in CamelCase. 9 Specify a shorter string to match your resource on the CLI. Note By default, a CRD is cluster-scoped and available to all projects. 
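Note that the apiextensions.k8s.io/v1 API also expects each entry under versions to declare served , storage , and an openAPIV3Schema validation schema; the following expanded sketch of the same CronTab example illustrates one possible complete definition:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
  versions:
  - name: v1
    served: true        # this version is enabled and served by the API server
    storage: true       # exactly one version must be marked as the storage version
    schema:
      openAPIV3Schema:  # minimal validation schema for the custom fields
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string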
Create the CRD object: USD oc create -f <file_name>.yaml A new RESTful API endpoint is created at: /apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/... For example, using the example file, the following endpoint is created: /apis/stable.example.com/v1/namespaces/*/crontabs/... You can now use this endpoint URL to create and manage CRs. The object kind is based on the spec.kind field of the CRD object you created. 2.8.1.3. Creating cluster roles for custom resource definitions Cluster administrators can grant permissions to existing cluster-scoped custom resource definitions (CRDs). If you use the admin , edit , and view default cluster roles, you can take advantage of cluster role aggregation for their rules. Important You must explicitly assign permissions to each of these roles. The roles with more permissions do not inherit rules from roles with fewer permissions. If you assign a rule to a role, you must also assign that verb to roles that have more permissions. For example, if you grant the get crontabs permission to the view role, you must also grant it to the edit and admin roles. The admin or edit role is usually assigned to the user that created a project through the project template. Prerequisites Create a CRD. Procedure Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An OpenShift Container Platform controller adds the rules that you specify to the default cluster roles. Example YAML file for a cluster role definition kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" 3 rbac.authorization.k8s.io/aggregate-to-edit: "true" 4 rules: - apiGroups: ["stable.example.com"] 5 resources: ["crontabs"] 6 verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the "view" default role. rbac.authorization.k8s.io/aggregate-to-view: "true" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" 10 rules: - apiGroups: ["stable.example.com"] 11 resources: ["crontabs"] 12 verbs: ["get", "list", "watch"] 13 1 Use the rbac.authorization.k8s.io/v1 API. 2 8 Specify a name for the definition. 3 Specify this label to grant permissions to the admin default role. 4 Specify this label to grant permissions to the edit default role. 5 11 Specify the group name of the CRD. 6 12 Specify the plural name of the CRD that these rules apply to. 7 13 Specify the verbs that represent the permissions that are granted to the role. For example, apply read and write permissions to the admin and edit roles and only read permission to the view role. 9 Specify this label to grant permissions to the view default role. 10 Specify this label to grant permissions to the cluster-reader default role. Create the cluster role: USD oc create -f <file_name>.yaml 2.8.1.4. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . 
The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.1.5. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. 2.8.2. Managing resources from custom resource definitions This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs). 2.8.2.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.2.2. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . 
The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.2.3. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays.
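Once a CRD is registered, the standard CLI verbs also work against its custom resources; for example, reusing the object from the examples above, the following generic operations edit or remove it:

oc edit crontab my-new-cron-object
oc delete crontab my-new-cron-object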
[ "etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml", "annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml", "catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json", "_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }", "#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }", "#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. 
skipRange?: string & !=\"\" }", "#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }", "#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }", "#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }", "#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317", "name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm alpha generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . 
docker push \"USDindexImage\"", "apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF", "bundle.core.rukpak.io/combo-tag-ref created", "oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'", "Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable", "manifests ├── namespace.yaml ├── cluster_role.yaml ├── role.yaml ├── serviceaccount.yaml ├── cluster_role_binding.yaml ├── role_binding.yaml └── deployment.yaml", "apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "registry.redhat.io/redhat/redhat-operator-index:v4.12", "registry.redhat.io/redhat/redhat-operator-index:v4.13", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.26 priority: -400 publisher: Example Org", "quay.io/example-org/example-catalog:v1.26", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] 
conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created", "packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1", "olm.skipRange: <semver_range>", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'", "properties: - type: olm.kubeversion value: version: \"1.16.0\"", "properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'", "type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: 
'>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue", "apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100", "dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"", "attenuated service account query failed - more than one operator group(s) are managing this namespace count=2", "apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]", "registry.redhat.io/redhat/redhat-operator-index:v4.8", "registry.redhat.io/redhat/redhat-operator-index:v4.9", "apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9", "oc create -f <file_name>.yaml", "/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/", 
"/apis/stable.example.com/v1/namespaces/*/crontabs/", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13", "oc create -f <file_name>.yaml", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operators/understanding-operators
1.5. Key Management
1.5. Key Management Before a certificate can be issued, the public key it contains and the corresponding private key must be generated. Sometimes it may be useful to issue a single person one certificate and key pair for signing operations and another certificate and key pair for encryption operations. Separate signing and encryption certificates keep the private signing key only on the local machine, providing maximum nonrepudiation. This also aids in backing up the private encryption key in some central location where it can be retrieved in case the user loses the original key or leaves the company. Keys can be generated by client software or generated centrally by the CA and distributed to users through an LDAP directory. There are costs associated with either method. Local key generation provides maximum nonrepudiation but may involve more participation by the user in the issuing process. Flexible key management capabilities are essential for most organizations. Key recovery , or the ability to retrieve backups of encryption keys under carefully defined conditions, can be a crucial part of certificate management, depending on how an organization uses certificates. In some PKI setups, several authorized personnel must agree before an encryption key can be recovered to ensure that the key is only recovered to the legitimate owner in authorized circumstances. It can be necessary to recover a key when information is encrypted and can only be decrypted by the lost key.
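As a generic illustration of local key generation, independent of any particular certificate system workflow and using placeholder subject names, a user could create separate signing and encryption key pairs and certificate requests with a tool such as OpenSSL:

openssl req -new -newkey rsa:2048 -nodes -keyout signing.key -out signing.csr -subj "/CN=Example User/O=Example Corp/OU=Signing"
openssl req -new -newkey rsa:2048 -nodes -keyout encryption.key -out encryption.csr -subj "/CN=Example User/O=Example Corp/OU=Encryption"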
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Managing_Certificates-Key_Management
Chapter 1. Notification of name change to Streams for Apache Kafka
Chapter 1. Notification of name change to Streams for Apache Kafka AMQ Streams is being renamed as streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat's product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_rhel/ref-name-change-str
2.3. Storage Concepts
2.3. Storage Concepts Following are the common terms relating to file systems and storage used throughout the Red Hat Gluster Storage Administration Guide . Brick The glusterFS basic unit of storage, represented by an export directory on a server in the trusted storage pool. A brick is expressed by combining a server with an export directory in the following format: SERVER:EXPORT For example: myhostname : /exports/myexportdir/ Volume A volume is a logical collection of bricks. Most of the Red Hat Gluster Storage management operations happen on the volume. Translator A translator connects to one or more subvolumes, does something with them, and offers a subvolume connection. Subvolume A brick after being processed by at least one translator. Volfile Volume (vol) files are configuration files that determine the behavior of your Red Hat Gluster Storage trusted storage pool. At a high level, GlusterFS has three entities, that is, Server, Client and Management daemon. Each of these entities have their own volume files. Volume files for servers and clients are generated by the management daemon upon creation of a volume. Server and Client Vol files are located in /var/lib/glusterd/vols/VOLNAME directory. The management daemon vol file is named as glusterd.vol and is located in /etc/glusterfs/ directory. Warning You must not modify any vol file in /var/lib/glusterd manually as Red Hat does not support vol files that are not generated by the management daemon. glusterd glusterd is the glusterFS Management Service that must run on all servers in the trusted storage pool. Cluster A trusted pool of linked computers working together, resembling a single computing resource. In Red Hat Gluster Storage, a cluster is also referred to as a trusted storage pool. Client The machine that mounts a volume (this may also be a server). File System A method of storing and organizing computer files. A file system organizes files into a database for the storage, manipulation, and retrieval by the computer's operating system. Source: Wikipedia Distributed File System A file system that allows multiple clients to concurrently access data which is spread across servers/bricks in a trusted storage pool. Data sharing among multiple locations is fundamental to all distributed file systems. Virtual File System (VFS) VFS is a kernel software layer that handles all system calls related to the standard Linux file system. It provides a common interface to several kinds of file systems. POSIX Portable Operating System Interface (for Unix) (POSIX) is the name of a family of related standards specified by the IEEE to define the application programming interface (API), as well as shell and utilities interfaces, for software that is compatible with variants of the UNIX operating system. Red Hat Gluster Storage exports a fully POSIX compatible file system. Metadata Metadata is data providing information about other pieces of data. FUSE Filesystem in User space ( FUSE ) is a loadable kernel module for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in user space while the FUSE module provides only a "bridge" to the kernel interfaces. Source: Wikipedia Geo-Replication Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks ( LAN ), Wide Area Networks ( WAN ), and the Internet. 
N-way Replication Local synchronous data replication that is typically deployed across campus or Amazon Web Services Availability Zones. Petabyte A petabyte is a unit of information equal to one quadrillion bytes, or 1000 terabytes. The unit symbol for the petabyte is PB. The prefix peta- (P) indicates a power of 1000: 1 PB = 1,000,000,000,000,000 B = 1000^5 B = 10^15 B. The term "pebibyte" ( PiB ), using a binary prefix, is used for the corresponding power of 1024. Source: Wikipedia RAID Redundant Array of Independent Disks ( RAID ) is a technology that provides increased storage reliability through redundancy. It combines multiple low-cost, less-reliable disk drives components into a logical unit where all drives in the array are interdependent. RRDNS Round Robin Domain Name Service ( RRDNS ) is a method to distribute load across application servers. RRDNS is implemented by creating multiple records with the same name and different IP addresses in the zone file of a DNS server. Server The machine (virtual or bare metal) that hosts the file system in which data is stored. Block Storage Block special files, or block devices, correspond to devices through which the system moves data in the form of blocks. These device nodes often represent addressable devices such as hard disks, CD-ROM drives, or memory regions. As of Red Hat Gluster Storage 3.4 and later, block storage supports only OpenShift Container Storage converged and independent mode use cases. Block storage can be created and configured for this use case by using the gluster-block command line tool. For more information, see Container-Native Storage for OpenShift Container Platform . Scale-Up Storage Increases the capacity of the storage device in a single dimension. For example, adding additional disk capacity in a trusted storage pool. Scale-Out Storage Increases the capability of a storage device in single dimension. For example, adding more systems of the same size, or adding servers to a trusted storage pool that increases CPU, disk capacity, and throughput for the trusted storage pool. Trusted Storage Pool A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of only that server. Namespace An abstract container or environment that is created to hold a logical grouping of unique identifiers or symbols. Each Red Hat Gluster Storage trusted storage pool exposes a single namespace as a POSIX mount point which contains every file in the trusted storage pool. User Space Applications running in user space do not directly interact with hardware, instead using the kernel to moderate access. User space applications are generally more portable than applications in kernel space. glusterFS is a user space application. Distributed Hash Table Terminology Hashed subvolume A Distributed Hash Table Translator subvolume to which the file or directory name is hashed to. Cached subvolume A Distributed Hash Table Translator subvolume where the file content is actually present. For directories, the concept of cached-subvolume is not relevant. It is loosely used to mean subvolumes which are not hashed-subvolume. Linkto-file For a newly created file, the hashed and cached subvolumes are the same. When directory entry operations like rename (which can change the name and hence hashed subvolume of the file) are performed on the file, instead of moving the entire data in the file to a new hashed subvolume, a file is created with the same name on the newly hashed subvolume. 
The purpose of this file is only to act as a pointer to the node where the data is present. In the extended attributes of this file, the name of the cached subvolume is stored. This file on the newly hashed-subvolume is called a linkto-file. The linkto file is relevant only for non-directory entities. Directory Layout The directory layout helps determine where files in a gluster volume are stored. When a client creates or requests a file, the DHT translator hashes the file's path to create an integer. Each directory in a gluster subvolume holds files that have integers in a specific range, so the hash of any given file maps to a specific subvolume in the gluster volume. The directory layout determines which integer ranges are assigned to a given directory across all subvolumes. Directory layouts are assigned when a directory is first created, and can be reassigned by running a rebalance operation on the volume. If a brick or subvolume is offline when a directory is created, it will not be part of the layout until after a rebalance is run. You should rebalance a volume to recalculate its directory layout after bricks are added to the volume. See Section 11.11, "Rebalancing Volumes" for more information. Fix Layout A command that is executed during the rebalance process. The rebalance process itself comprises of two stages: Fixes the layouts of directories to accommodate any subvolumes that are added or removed. It also heals the directories, checks whether the layout is non-contiguous, and persists the layout in extended attributes, if needed. It also ensures that the directories have the same attributes across all the subvolumes. Migrates the data from the cached-subvolume to the hashed-subvolume.
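To tie several of the terms above together (trusted storage pool, brick, volume, FUSE client, and fix layout), the following is a minimal command sketch using the standard gluster CLI. The host names, brick paths, and volume name are illustrative assumptions rather than values from this guide, and the export directories are assumed to already exist on each server.
# Expand the trusted storage pool from the first server
gluster peer probe server2.example.com
gluster peer probe server3.example.com
gluster pool list
# Create and start a distributed volume from one brick (SERVER:EXPORT) per server
gluster volume create myvol server1.example.com:/exports/brick1/myvol server2.example.com:/exports/brick1/myvol
gluster volume start myvol
# Mount the volume on a client through the FUSE bridge
mount -t glusterfs server1.example.com:/myvol /mnt/myvol
# After adding a brick, fix the directory layouts and migrate data (rebalance)
gluster volume add-brick myvol server3.example.com:/exports/brick1/myvol
gluster volume rebalance myvol fix-layout start
gluster volume rebalance myvol start
gluster volume rebalance myvol status
The fix-layout form only recalculates directory layouts, while the plain start form also migrates data, which corresponds to the two rebalance stages described above.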
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/storage_concepts
A.2. Authentication Methods
A.2. Authentication Methods Anonymous SASL Mechanism ( RFC 4505 ) Not supported. Note that RFC 4512 does not require the ANONYMOUS SASL mechanism. However, Directory Server supports LDAP anonymous binds. External SASL Mechanism ( RFC 4422 ) Supported. Plain SASL Mechanism ( RFC 4616 ) Not supported. Note that RFC 4512 does not require the PLAIN SASL mechanism. However, Directory Server supports LDAP anonymous binds. SecurID SASL Mechanism ( RFC 2808 ) Not supported. However, if a Cyrus SASL plug-in exists, Directory Server is able to use it. Kerberos V5 (GSSAPI) SASL Mechanism ( RFC 4752 ) Supported. CRAM-MD5 SASL Mechanism ( RFC 2195 ) Supported. Digest-MD5 SASL Mechanism ( RFC 2831 ) Supported. One-time Password SASL Mechanism ( RFC 2444 ) Not supported. However, if a Cyrus SASL plug-in exists, Directory Server is able to use it.
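As an illustration of how a client exercises some of these mechanisms, the sketch below uses the OpenLDAP ldapsearch client against a hypothetical Directory Server instance. The host name, suffix, user, and Kerberos principal are assumptions; GSSAPI additionally requires a working Kerberos configuration and the cyrus-sasl-gssapi package on the client, and DIGEST-MD5 requires the corresponding Cyrus SASL plug-in.
# LDAP anonymous bind (simple bind with no credentials)
ldapsearch -x -H ldap://ds.example.com -b "dc=example,dc=com" -s base "(objectclass=*)"
# SASL GSSAPI (Kerberos V5) bind
kinit jsmith@EXAMPLE.COM
ldapsearch -Y GSSAPI -H ldap://ds.example.com -b "dc=example,dc=com" "(uid=jsmith)"
# SASL DIGEST-MD5 bind
ldapsearch -Y DIGEST-MD5 -U jsmith -H ldap://ds.example.com -b "dc=example,dc=com" "(uid=jsmith)"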
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/auth-method-rfc-support
Chapter 7. Available BPF Features
Chapter 7. Available BPF Features This chapter provides the complete list of Berkeley Packet Filter ( BPF ) features available in the kernel of this minor version of Red Hat Enterprise Linux 8. The tables include the lists of: System configuration and other options Available program types and supported helpers Available map types This chapter contains automatically generated output of the bpftool feature command. Table 7.1. System configuration and other options Option Value unprivileged_bpf_disabled 1 (bpf() syscall restricted to privileged users, without recovery) JIT compiler 1 (enabled) JIT compiler hardening 1 (enabled for unprivileged users) JIT compiler kallsyms exports 1 (enabled for root) Memory limit for JIT for unprivileged users 264241152 CONFIG_BPF y CONFIG_BPF_SYSCALL y CONFIG_HAVE_EBPF_JIT y CONFIG_BPF_JIT y CONFIG_BPF_JIT_ALWAYS_ON y CONFIG_DEBUG_INFO_BTF y CONFIG_DEBUG_INFO_BTF_MODULES n CONFIG_CGROUPS y CONFIG_CGROUP_BPF y CONFIG_CGROUP_NET_CLASSID y CONFIG_SOCK_CGROUP_DATA y CONFIG_BPF_EVENTS y CONFIG_KPROBE_EVENTS y CONFIG_UPROBE_EVENTS y CONFIG_TRACING y CONFIG_FTRACE_SYSCALLS y CONFIG_FUNCTION_ERROR_INJECTION y CONFIG_BPF_KPROBE_OVERRIDE y CONFIG_NET y CONFIG_XDP_SOCKETS y CONFIG_LWTUNNEL_BPF y CONFIG_NET_ACT_BPF m CONFIG_NET_CLS_BPF m CONFIG_NET_CLS_ACT y CONFIG_NET_SCH_INGRESS m CONFIG_XFRM y CONFIG_IP_ROUTE_CLASSID y CONFIG_IPV6_SEG6_BPF n CONFIG_BPF_LIRC_MODE2 n CONFIG_BPF_STREAM_PARSER y CONFIG_NETFILTER_XT_MATCH_BPF m CONFIG_BPFILTER n CONFIG_BPFILTER_UMH n CONFIG_TEST_BPF m CONFIG_HZ 1000 bpf() syscall available Large program size limit available Table 7.2. Available program types and supported helpers Program type Available helpers socket_filter bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf kprobe bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_override_return, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, 
bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sched_cls bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf sched_act bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, 
bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf xdp bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_redirect, bpf_perf_event_output, bpf_csum_diff, bpf_get_current_task, bpf_get_numa_node_id, bpf_xdp_adjust_head, bpf_redirect_map, bpf_xdp_adjust_meta, bpf_xdp_adjust_tail, bpf_fib_lookup, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf perf_event bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, 
bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_skb_cgroup_id, bpf_get_local_storage, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sock bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_in bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_out bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, 
bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_xmit bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sock_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_sock_map_update, bpf_getsockopt, bpf_sock_ops_cb_flags_set, bpf_sock_hash_update, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sk_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_adjust_room, bpf_sk_redirect_map, bpf_sk_redirect_hash, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_device bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, 
bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sk_msg bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_msg_redirect_hash, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf raw_tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sock_addr bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_getsockopt, bpf_bind, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, 
bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_seg6local bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lirc_mode2 not supported sk_reuseport bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_skb_load_bytes_relative, bpf_sk_select_reuseport, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf flow_dissector bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sysctl bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, 
bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf raw_tracepoint_writable bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sockopt bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf tracing not supported struct_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_perf_event_read, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_stackid, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_xdp_adjust_head, bpf_probe_read_str, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_setsockopt, bpf_skb_adjust_room, bpf_redirect_map, bpf_sk_redirect_map, bpf_sock_map_update, bpf_xdp_adjust_meta, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_getsockopt, bpf_override_return, bpf_sock_ops_cb_flags_set, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_bind, bpf_xdp_adjust_tail, bpf_skb_get_xfrm_state, bpf_get_stack, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_sock_hash_update, bpf_msg_redirect_hash, bpf_sk_redirect_hash, 
bpf_lwt_push_encap, bpf_lwt_seg6_store_bytes, bpf_lwt_seg6_adjust_srh, bpf_lwt_seg6_action, bpf_rc_repeat, bpf_rc_keydown, bpf_skb_cgroup_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_select_reuseport, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_rc_pointer_rel, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_send_signal, bpf_tcp_gen_syncookie, bpf_skb_output, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_tcp_send_ack, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_xdp_output, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_seq_printf, bpf_seq_write, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_get_task_stack, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_inode_storage_get, bpf_inode_storage_delete, bpf_d_path, bpf_copy_from_user, bpf_snprintf_btf, bpf_seq_printf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_bprm_opts_set, bpf_ktime_get_coarse_ns, bpf_ima_inode_hash, bpf_sock_from_file, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_sys_bpf, bpf_btf_find_by_name_kind, bpf_sys_close ext not supported lsm not supported sk_lookup bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf Table 7.3. Available map types Map type Available hash yes array yes prog_array yes perf_event_array yes percpu_hash yes percpu_array yes stack_trace yes cgroup_array yes lru_hash yes lru_percpu_hash yes lpm_trie yes array_of_maps yes hash_of_maps yes devmap yes sockmap yes cpumap yes xskmap yes sockhash yes cgroup_storage yes reuseport_sockarray yes percpu_cgroup_storage yes queue yes stack yes sk_storage yes devmap_hash yes struct_ops no ringbuf yes inode_storage yes task_storage no
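The tables in this chapter reproduce bpftool output. To generate the equivalent report for the kernel you are actually running, you can probe it directly; root privileges are generally needed for a complete probe, and the exact output depends on your kernel and bpftool versions. The grep at the end is only an illustrative way to check one entry.
# Full probe of kernel configuration, helpers, and map types
bpftool feature probe kernel
# Check a single item from Table 7.1 without bpftool
sysctl kernel.unprivileged_bpf_disabled
# Example: confirm that the ringbuf map type is reported as available
bpftool feature probe kernel | grep -i ringbuf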
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/available_bpf_features
Chapter 7. Uninstalling the Ansible plug-ins from a Helm installation on OpenShift Container Platform
Chapter 7. Uninstalling the Ansible plug-ins from a Helm installation on OpenShift Container Platform To uninstall the Ansible plug-ins, you must remove any software templates that use the ansible:content:create action from Red Hat Developer Hub, and remove the plug-ins configuration from the Helm chart in OpenShift. 7.1. Uninstalling a Helm chart installation Procedure In Red Hat Developer Hub, remove any software templates that use the ansible:content:create action. In the OpenShift Developer UI, navigate to Helm developer-hub Actions Upgrade Yaml view . Remove the Ansible plug-ins configuration under the plugins section. ... global: ... plugins: - disabled: false integrity: <SHA512 value> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null Remove the extraContainers section. upstream: backstage: | ... extraContainers: - command: - adt - server image: >- registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest imagePullPolicy: IfNotPresent name: ansible-devtools-server ports: - containerPort: 8000 image: pullPolicy: Always pullSecrets: - ... - rhdh-secret-registry ... Click Upgrade . Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Remove the ansible section. data: app-config-rhdh.yaml: | ... ansible: analytics: enabled: true devSpaces: baseUrl: '<https://MyOwnDevSpacesUrl/>' creatorService: baseUrl: '127.0.0.1' port: '8000' rhaap: baseUrl: '<https://MyAapSubcriptionUrl>' token: '<TopSecretAAPToken>' checkSSL: true automationHub: baseUrl: '<https://MyOwnPAHUrl/>' Restart the Red Hat Developer Hub deployment. Remove the plugin-registry OpenShift application. oc delete all -l app=plugin-registry
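After removing the configuration, the following commands are one way to verify the cleanup and restart the deployment. The deployment name developer-hub is an assumption; use the name reported by oc get deployments in your Red Hat Developer Hub namespace.
# Confirm the plugin registry objects are gone
oc get all -l app=plugin-registry
# Restart Developer Hub so it picks up the updated ConfigMap (deployment name is assumed)
oc rollout restart deployment/developer-hub
oc rollout status deployment/developer-hub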
[ "global: plugins: - disabled: false integrity: <SHA512 value> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 value> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null", "upstream: backstage: | extraContainers: - command: - adt - server image: >- registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest imagePullPolicy: IfNotPresent name: ansible-devtools-server ports: - containerPort: 8000 image: pullPolicy: Always pullSecrets: - - rhdh-secret-registry", "data: app-config-rhdh.yaml: | ansible: analytics: enabled: true devSpaces: baseUrl: '<https://MyOwnDevSpacesUrl/>' creatorService: baseUrl: '127.0.0.1' port: '8000' rhaap: baseUrl: '<https://MyAapSubcriptionUrl>' token: '<TopSecretAAPToken>' checkSSL: true automationHub: baseUrl: '<https://MyOwnPAHUrl/>'", "delete all -l app=plugin-registry" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-uninstall-ocp-helm_aap-plugin-rhdh-installing
3.2. Packages Required to Install a Client
3.2. Packages Required to Install a Client Install the ipa-client package: The ipa-client package automatically installs other required packages as dependencies, such as the System Security Services Daemon (SSSD) packages.
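For example, you could install the package and then confirm that the dependencies were pulled in; sssd is shown here as a representative dependency, and the exact list depends on your system.
yum install ipa-client
# Verify the client package and a representative dependency are installed
rpm -q ipa-client sssd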
[ "yum install ipa-client" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-automatic-required-packages
Chapter 8. Sources
Chapter 8. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 8: http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/ For Red Hat Enterprise Linux 9: http://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/
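As an illustrative sketch, a source package can be downloaded from one of these locations and unpacked locally. The package file name below is a placeholder; substitute an actual file name from the SRPMS directory listing.
# Download a source RPM (placeholder file name)
wget http://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/ceph-<version>.src.rpm
# List its contents, or install it into ~/rpmbuild/SOURCES and ~/rpmbuild/SPECS
rpm -qpl ceph-<version>.src.rpm
rpm -ivh ceph-<version>.src.rpm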
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.2_release_notes/sources
Chapter 4. Server Configuration Changes
Chapter 4. Server Configuration Changes 4.1. RPM Installation Changes In JBoss EAP 6, the default path for the RPM installation was the /usr/share/jbossas/ directory. JBoss EAP 7 was built using Red Hat Software Collections conventions. The root directory of Software Collections is normally located in the /opt/ directory to avoid possible conflicts between Software Collections and the base system installation. The use of the /opt/ directory is recommended by the Filesystem Hierarchy Standard (FHS). As a result, the default path for the RPM installation has changed to /opt/rh/eap7/root/usr/share/wildfly/ in JBoss EAP 7. 4.2. Server Configuration Migration Options To migrate your server configuration from JBoss EAP 6 to JBoss EAP 7, you can either use the JBoss Server Migration Tool or you can perform a manual migration with the help of the management CLI migrate operation . JBoss Server Migration Tool The JBoss Server Migration Tool is the preferred method to update your configuration to include the new features and settings in JBoss EAP 7 while keeping your existing configuration. For information about how to configure and run the tool, see Using the JBoss Server Migration Tool . Management CLI Migrate Operation You can use the management CLI migrate operation to update the jacorb , messaging , and web subsystems in the JBoss EAP 6 server configuration file to allow them to run on the new release, but be aware that the result is not a complete JBoss EAP 7 configuration. For example: The operation does not update the original remote protocol and port settings to the new http-remoting and new port settings now used in JBoss EAP 7. The configuration does not include the new JBoss EAP subsystems or features such as clustered singleton deployments or graceful shutdown. The configuration does not include the Jakarta EE 8 features such as batch processing. The migrate operation does not migrate the ejb3 subsystem configuration. For information about possible Jakarta Enterprise Beans migration issues, see Jakarta Enterprise Beans Server Configuration Changes . For more information about using the migrate operation to migrate the server configuration, see Management CLI Migration Operation . 4.3. Management CLI Migration Operation You can use the management CLI to update your JBoss EAP 6 server configuration files to run on JBoss EAP 7. The management CLI provides a migrate operation to automatically update the jacorb , messaging , and web subsystems from the previous release to the new configuration. You can also execute the describe-migration operation for the jacorb , messaging , and web subsystems to review the proposed migration configuration changes before you perform the migration. There are no replacements for the cmp , jaxr , or threads subsystems and they must be removed from the server configuration. Important See Server Configuration Migration Options for limitations of the migrate operation. The JBoss Server Migration Tool is the preferred method to update your configuration to include the new features and settings in JBoss EAP 7 while keeping your existing configuration. For information about how to configure and run the tool, see Using the JBoss Server Migration Tool . Table 4.1. 
Subsystem Migration and Management CLI Operation JBoss EAP 6 Subsystem JBoss EAP 7 Subsystem Management CLI Operation cmp no replacement remove jacorb iiop-openjdk migrate jaxr no replacement remove messaging messaging-activemq migrate threads no replacement remove web undertow migrate Start the Server and the Management CLI Follow the steps below to update your JBoss EAP 6 server configuration to run on JBoss EAP 7. Before you begin, review Back Up Important Data and Review Server State . It contains important information about making sure the server is in a good state and the appropriate files are backed up. Start the JBoss EAP 7 server with the JBoss EAP 6 configuration. Back up the JBoss EAP 7 server configuration files. Copy the configuration file from the release into the JBoss EAP 7 directory. Navigate to the JBoss EAP 7 install directory and start the server with the --start-mode=admin-only argument. Note You will see the following org.jboss.as.controller.management-operation ERRORS in the server log when you start the server. These errors are expected and indicate that the legacy subsystem configurations must be removed or migrated to JBoss EAP 7. WFLYCTL0402: Subsystems [cmp] provided by legacy extension 'org.jboss.as.cmp' are not supported on servers running this version. Both the subsystem and the extension must be removed or migrated before the server will function. WFLYCTL0402: Subsystems [jacorb] provided by legacy extension 'org.jboss.as.jacorb' are not supported on servers running this version. Both the subsystem and the extension must be removed or migrated before the server will function. WFLYCTL0402: Subsystems [jaxr] provided by legacy extension 'org.jboss.as.jaxr' are not supported on servers running this version. Both the subsystem and the extension must be removed or migrated before the server will function. WFLYCTL0402: Subsystems [messaging] provided by legacy extension 'org.jboss.as.messaging' are not supported on servers running this version. Both the subsystem and the extension must be removed or migrated before the server will function. WFLYCTL0402: Subsystems [threads] provided by legacy extension 'org.jboss.as.threads' are not supported on servers running this version. Both the subsystem and the extension must be removed or migrated before the server will function. WFLYCTL0402: Subsystems [web] provided by legacy extension 'org.jboss.as.web' are not supported on servers running this version. Both the subsystem and the extension must be removed or migrated before the server will function. Open a new terminal, navigate to the JBoss EAP 7 install directory, and start the management CLI using the --controller=remote://localhost:9990 arguments. Migrate the JacORB, Messaging, and Web Subsystems To review the configuration changes that will be made to the subsystem before you perform the migration, execute the describe-migration operation. The describe-migration operation uses the following syntax. The following example describes the configuration changes that are made to the JBoss EAP 6.4 standalone-full.xml configuration file when it is migrated to JBoss EAP 7. Entries were removed from the output to improve readability and to save space. Example: describe-migration Operation Execute the migrate operation to migrate the subsystem configuration to the subsystem that replaces it in JBoss EAP 7. The operation uses the following syntax. 
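As a sketch of how these operations are typically invoked from the management CLI prompt, the commands below cover the subsystems listed in Table 4.1; the removal of the obsolete subsystems and their extensions is described later in this procedure. Run the appropriate operation for each subsystem you need to handle.
# Review the proposed changes without applying them
/subsystem=messaging:describe-migration
# Migrate a legacy subsystem configuration in place
/subsystem=jacorb:migrate
/subsystem=messaging:migrate
/subsystem=web:migrate
# Subsystems with no replacement are removed together with their extensions
/subsystem=cmp:remove
/extension=org.jboss.as.cmp:remove
/subsystem=jaxr:remove
/extension=org.jboss.as.jaxr:remove
/subsystem=threads:remove
/extension=org.jboss.as.threads:remove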
Note The messaging subsystem describe-migration and migrate operations allow you to pass an argument to configure access by legacy clients. For more information about the command syntax, see Messaging Subsystem Migration and Forward Compatibility . Review the outcome and result of the command. Be sure the operation completed successfully and there are no "migration-warning" entries. This means the migration configuration for the subsystem is complete. Example: Successful migrate Operation with No Warnings If you see "migration-warnings" entries in the log, this indicates the migration of the server configuration completed successfully but it was not able to migrate all of elements and attributes. You must follow the suggestions provided by the "migration-warnings" and run additional management CLI commands to modify those configurations. The following is an example of a migrate operation that returns "migration-warnings". Example: migrate Operation with Warnings Note The list of migrate and describe-migration warnings for each subsystem is located in the Reference Material at the end of this guide. Jacorb Subsystem Migration Operation Warnings Messaging Subsystem Migration Operation Warnings Web Subsystem Migration Operation Warnings Review the server configuration file to verify the extension, subsystem, and namespace were updated and the existing subsystem configuration was migrated to JBoss EAP 7. Note You must repeat this process for each of the jacorb , messaging , and web subsystems using the following commands. Remove the cmp , jaxr , and threads subsystems and extensions from the server configuration. While still in the management CLI prompt, remove the obsolete cmp , jaxr , and threads subsystems by executing the following commands. Important You must migrate the messaging , jacorb , and web subsystems and remove the cmp , jaxr , and threads extensions and subsystems before you can restart the server for normal operation. If you need to restart the server before you complete this process, be sure to include the --start-mode=admin-only argument on the server start command line. This allows you to continue with the configuration changes. 4.4. Logging Changes 4.4.1. Logging Message Prefix Changes Log messages are prefixed with the project code for the subsystem that reports the message. The prefixes for all log messages have changed in JBoss EAP 7. For a complete list of the new log message project code prefixes used in JBoss EAP 7, see Project Codes Used in JBoss EAP in the JBoss EAP Development Guide . 4.4.2. Root Logger Console Handler Changes The JBoss EAP 7.0 root logger included a console log handler for all domain server profiles and for all default standalone profiles except the standalone-full-ha profile. As of JBoss EAP 7.1, the root logger no longer includes a console log handler for the managed domain profiles. The host controller and process controller log to the console by default. To achieve the same functionality that was provided in JBoss EAP 7.0, see Configure a Console Log Handler in the Configuration Guide for JBoss EAP. 4.5. Web Server Configuration Changes 4.5.1. Replace the Web Subsystem with Undertow Undertow replaces JBoss Web as the web server in JBoss EAP 7. This means the legacy web subsystem configuration must be migrated to the new JBoss EAP 7 undertow subsystem configuration. The urn:jboss:domain:web:2.2 subsystem configuration namespace in the server configuration file has been replaced by the urn:jboss:domain:undertow:10.0 namespace. 
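A quick way to see which namespace a given configuration file uses, and therefore whether it still carries the legacy web subsystem, is to search for the namespace strings. The file path below assumes a default standalone installation using standalone-full.xml.
# Should return no matches after a successful migration
grep "urn:jboss:domain:web:" EAP_HOME/standalone/configuration/standalone-full.xml
# Should show the new undertow subsystem namespace
grep "urn:jboss:domain:undertow:" EAP_HOME/standalone/configuration/standalone-full.xml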
The org.jboss.as.web extension module, located in EAP_HOME /modules/system/layers/base/ , has been replaced with the org.wildfly.extension.undertow extension module. You can use the management CLI migrate operation to migrate the web subsystem to undertow in the server configuration file. However, be aware that this operation is not able to migrate all JBoss Web subsystem configurations. If you see "migration-warning" entries, you must run additional management CLI commands to migrate those configurations to Undertow. For more information about the management CLI migrate operation, see Management CLI Migration Operation . The following is an example of the default web subsystem configuration in JBoss EAP 6.4. <subsystem xmlns="urn:jboss:domain:web:2.2" default-virtual-server="default-host" native="false"> <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/> <virtual-server name="default-host" enable-welcome-root="true"> <alias name="localhost"/> <alias name="example.com"/> </virtual-server> </subsystem> The following is an example of the default undertow subsystem configuration in JBoss EAP 7.4. <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> ... </subsystem> 4.5.2. Migrate JBoss Web Rewrite Conditions The management CLI migrate operation is not able to automatically migrate rewrite conditions. They are reported as "migration-warnings", and you must migrate them manually. You can create the equivalent configuration in JBoss EAP 7 by using Undertow Predicates Attributes and Handlers . The following is an example of a web subsystem configuration in JBoss EAP 6 that includes rewrite configuration. <subsystem xmlns="urn:jboss:domain:web:2.2" default-virtual-server="default" native="false"> <virtual-server name="default" enable-welcome-root="true"> <alias name="localhost"/> <rewrite name="test" pattern="(.*)/toberewritten/(.*)" substitution="USD1/rewritten/USD2" flags="NC"/> <rewrite name="test2" pattern="(.*)" substitution="-" flags="F"> <condition name="get" test="%{REQUEST_METHOD}" pattern="GET"/> <condition name="andCond" test="%{REQUEST_URI}" pattern=".*index.html" flags="NC"/> </rewrite> </virtual-server> </subsystem> Follow the Management CLI Migration Operation instructions to start your server and the management CLI, then migrate the web subsystem configuration file using the following command. The following "migration-warnings" are reported when you run the migrate operation on the above configuration. Review the server configuration file and you see the following configuration for the undertow subsystem. Note The rewrite configuration is dropped. 
<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="http" socket-binding="http"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost, example.com"> <location name="/" handler="welcome-content"/> </host> </server> <servlet-container name="default"> <jsp-config/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> Use the management CLI to create the filter to replace the rewrite configuration in the undertow subsystem. You should see "{"outcome" ⇒ "success"}" for each command. Review the updated server configuration file. The JBoss Web subsystem is now completely migrated and configured in the undertow subsystem. <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="http" socket-binding="http"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost, example.com"> <location name="/" handler="welcome-content"/> <filter-ref name="test1"/> <filter-ref name="test2"/> </host> </server> <servlet-container name="default"> <jsp-config/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> <filters> <expression-filter name="test1" expression="path('(.*)/toberewritten/(.*)') -> rewrite('USD1/rewritten/USD2')"/> <expression-filter name="test2" expression="method('GET') and path('.*index.html') -> response-code(403)"/> </filters> </subsystem> For more information about how to configure filters and handlers using the management CLI, see Configuring the Web Server in the JBoss EAP 7 Configuration Guide . 4.5.3. Migrate JBoss Web System Properties In the release of JBoss EAP, system properties could be used to modify the default JBoss Web behavior. For information about how to configure the same behavior in Undertow, see JBoss Web System Properties Migration Reference 4.5.4. Update the Access Log Header Pattern When you migrate from JBoss EAP 6.4 to JBoss EAP 7, you might find that the access logs no longer write the expected "Referer" and "User-agent" values. This is because JBoss Web, which was included in JBoss EAP 6.4, used a pattern of %{headername}i in the access-log to log an incoming header. Example: Access Log Format in JBoss EAP 6.4 <access-log pattern="%h %l %u %t &quot;%T sec&quot; &quot;%r&quot; %s %b &quot;%{Referer}i&quot; &quot;%{User-agent}i&quot;"/> With the change to use Undertow in JBoss EAP 7, the pattern for an incoming header has changed to %{i,headername} . Example: Access Format Header in JBoss EAP 7 <access-log pattern="%h %l %u %t &quot;%T sec&quot; &quot;%r&quot; %s %b &quot;%{i,Referer}&quot; &quot;%{i,User-Agent}&quot;"/> 4.5.5. Migrate Global Valves releases of JBoss EAP supported valves. Valves are custom classes inserted into the request processing pipeline for an application before servlet filters to make changes to the request or perform additional processing. 
Global valves are inserted into the request processing pipeline of all deployed applications and are configured in the server configuration file. Authenticator valves authenticate the credentials of the request. Custom application valves were created by extending the org.apache.catalina.valves.ValveBase class and configured in the <valve> element of the jboss-web.xml descriptor file. These valves must be migrated manually. This section describes how to migrate global valves. Migration of custom and authenticator valves is covered in the Migrate Custom Application Valves section of this guide. Undertow, which replaces JBoss Web in JBoss EAP 7, does not support global valves; however, you should be able to achieve similar functionality by using Undertow handlers. Undertow includes a number of built-in handlers that provide common functionality. It also provides the ability to create custom handlers, which can be used to replace custom valve functionality. If your application uses valves, you must replace them with the appropriate Undertow handler code to achieve the same functionality when you migrate to JBoss EAP 7. For more information about how to configure handlers, see Configuring Handlers in the JBoss EAP 7 Configuration Guide . For more information about how to configure filters, see Configuring Filters in the JBoss EAP 7 Configuration Guide . Migrate JBoss Web Valves The following table lists the valves that were provided by JBoss Web in the previous release of JBoss EAP and the corresponding Undertow built-in handler. The JBoss Web valves are located in the org.apache.catalina.valves package. Table 4.2. Mapping Valves to Handlers Valve Handler AccessLogValve io.undertow.server.handlers.accesslog.AccessLogHandler CrawlerSessionManagerValve io.undertow.servlet.handlers.CrawlerSessionManagerHandler ExtendedAccessLogValve io.undertow.server.handlers.accesslog.AccessLogHandler JDBCAccessLogValve See the JDBCAccessLogValve Manual Migration Procedure below for instructions. RemoteAddrValve io.undertow.server.handlers.IPAddressAccessControlHandler RemoteHostValve io.undertow.server.handlers.AccessControlListHandler RemoteIpValve io.undertow.server.handlers.ProxyPeerAddressHandler RequestDumperValve io.undertow.server.handlers.RequestDumpingHandler RewriteValve See Migrate JBoss Web Rewrite Conditions for instructions to migrate these valves manually. StuckThreadDetectionValve io.undertow.server.handlers.StuckThreadDetectionHandler You can use the management CLI migrate operation to automatically migrate global valves that meet the following criteria: They are limited to the valves listed in the table that do not require manual processing. They must be defined in the web subsystem of the server configuration file. For more information about the management CLI migrate operation, see Management CLI Migration Operation . JDBCAccessLogValve Manual Migration Procedure The org.apache.catalina.valves.JDBCAccessLogValve valve is an exception to the rule and cannot be automatically migrated to io.undertow.server.handlers.JDBCLogHandler . Follow the steps below to migrate the following example valve. 
<valve name="jdbc" module="org.jboss.as.web" class-name="org.apache.catalina.valves.JDBCAccessLogValve"> <param param-name="driverName" param-value="com.mysql.jdbc.Driver" /> <param param-name="connectionName" param-value="root" /> <param param-name="connectionPassword" param-value="password" /> <param param-name="connectionURL" param-value="jdbc:mysql://localhost:3306/wildfly?zeroDateTimeBehavior=convertToNull" /> <param param-name="format" param-value="combined" /> </valve> Create a driver module for the database that will store the log entries. Configure the datasource for the database and add the driver to the list of available drivers in the datasources subsystem. <datasources> <datasource jndi-name="java:jboss/datasources/accessLogDS" pool-name="accessLogDS" enabled="true" use-java-context="true"> <connection-url>jdbc:mysql://localhost:3306/wildfly?zeroDateTimeBehavior=convertToNull</connection-url> <driver>mysql</driver> <security> <user-name>root</user-name> <password>Password1!</password> </security> </datasource> ... <drivers> <driver name="mysql" module="com.mysql"> <driver-class>com.mysql.jdbc.Driver</driver-class> </driver> ... </drivers> </datasources> Configure an expression-filter in the undertow subsystem with the following expression: jdbc-access-log(datasource= DATASOURCE_JNDI_NAME ) . <filters> <expression-filter name="jdbc-access" expression="jdbc-access-log(datasource='java:jboss/datasources/accessLogDS')" /> ... </filters> 4.5.6. Changes to Set-Cookie Behavior specifications for Set-Cookie HTTP response header syntax, for example RFC2109 and RFC2965, allowed white space and other separator characters in the cookie value when the cookie value was quoted. JBoss Web in JBoss EAP 6.4 conformed to the specifications and automatically quoted a cookie value when it contained any separator characters. The RFC6265 specification for Set-Cookie HTTP response header syntax states that cookie values in the Set-Cookie response header must conform to specific grammar constraints. For example, they must be US-ASCII characters, but they cannot include CTRLs (controls), whitespace, double quotes, commas, semicolons, or backslash characters. In JBoss EAP 7.0, prior to cumulative patch Red Hat JBoss Enterprise Application Platform 7.0 Update 08 , Undertow does not restrict these invalid characters and does not quote cookies that contained the excluded characters. If you apply this cumulative patch or a newer cumulative patch you can enable RFC6265 compliant cookie validation by setting the io.undertow.cookie.DEFAULT_ENABLE_RFC6265_COOKIE_VALIDATION system property to true . Starting in JBoss EAP 7.1, by default, Undertow does not enable RFC6265 compliant cookie validation. It does quote cookies that contain the excluded characters. Starting in JBoss EAP 7.1, you cannot use the io.undertow.cookie.DEFAULT_ENABLE_RFC6265_COOKIE_VALIDATION system property to enable RFC6265 compliant cookie validation. Instead, you enable RFC6265 compliant cookie validation for an HTTP, HTTPS, or AJP listener by setting the rfc6265-cookie-validation listener attribute to true . The default value for this attribute is false . The following example enables RFC6265 compliant cookie validation for the HTTP listener. 4.5.7. Changes to HTTP Method Call Behavior JBoss EAP 6.4, which included JBoss Web as the web server, allowed HTTP TRACE method calls by default. Undertow, which replaces JBoss Web as the web server in JBoss EAP 7, disallows HTTP TRACE method calls by default. 
This setting is configured using the disallowed-methods attribute of the http-listener element in the undertow subsystem. This can be confirmed by reviewing the output from the following read-resource command. Note that the value for the disallowed-methods attribute is ["TRACE"] . To enable HTTP TRACE method calls in JBoss EAP 7 and later, you must remove the "TRACE" entry from the disallowed-methods attribute list by running the following command. When you run the read-resource command again, you will notice the TRACE method call is no longer in the list of disallowed methods. For more information about the default behavior of HTTP methods, see Default Behavior of HTTP Methods in the JBoss EAP Configuration Guide . 4.5.8. Changes in the Default Web Module Behavior In JBoss EAP 7.0, the root context of a web application was disabled by default in mod_cluster. As of JBoss EAP 7.1, this is no longer the case. This can have unexpected consequences if you are expecting the root context to be disabled. For example, requests can be misrouted to undesired nodes or a private application that should not be exposed can be inadvertently accessible through a public proxy. Undertow locations are also now registered with the mod_cluster load balancer automatically unless they are explicitly excluded. Use the following management CLI command to exclude ROOT from the modcluster subsystem configuration. Use the following management CLI command to disable the default welcome web application. For more information about how to configure the default welcome web application, see Configure the Default Welcome Web Application in the Development Guide for JBoss EAP. 4.5.9. Changes in the Undertow Subsystem Default Configuration Prior to JBoss EAP 7.2, the default undertow subsystem configuration included two response header filters that were appended to each HTTP response by the default-host . Server , which was set to JBoss-EAP/7 . X-Powered-By , which was set to Undertow/1 . These response header filters were removed from the default JBoss EAP 7.2 configuration to prevent inadvertent disclosure of information about the server in use. The following is an example of the default undertow subsystem configuration in JBoss EAP 7.1. <subsystem xmlns="urn:jboss:domain:undertow:4.0"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <filter-ref name="server-header"/> <filter-ref name="x-powered-by-header"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> <filters> <response-header name="server-header" header-name="Server" header-value="JBoss-EAP/7"/> <response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow/1"/> </filters> </subsystem> The following is an example of the new default undertow subsystem configuration in JBoss EAP 7.4. 
<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> 4.6. JGroups Server Configuration Changes 4.6.1. JGroups Defaults to a Private Network Interface In the JBoss EAP 6 default configuration, JGroups used the public interface defined in the <interfaces> section of the server configuration file. Because it is a recommended practice to use a dedicated network interface, JGroups now defaults to using the new private interface that is defined in the <interfaces> section of the server configuration file in JBoss EAP 7. 4.6.2. JGroups Channels Changes JGroups provides group communication support for HA services in the form of JGroups channels. JBoss EAP 7 introduces <channel> elements to the jgroups subsystem in the server configuration file. You can add, remove, or change the configuration of JGroups channels using the management CLI. For more information about how to configure JGroups, see Cluster Communication with JGroups in the JBoss EAP Configuration Guide . 4.7. Infinispan Server Configuration Changes 4.7.1. Infinispan Default Cache Configuration Changes In JBoss EAP 6, the default clustered caches for web session replication and EJB replication were replicated ASYNC caches. This has changed in JBoss EAP 7. The default clustered caches are now distributed ASYNC caches. The replicated caches are no longer even configured by default. See Configure the Cache Mode in the JBoss EAP Configuration Guide for information about how to add a replicated cache and make it the default. This only affects you when you use the new JBoss EAP 7 default configuration. If you migrate the configuration from JBoss EAP 6, the configuration of the infinispan subsystem will be preserved. 4.7.2. Infinispan Cache Strategy Changes The behavior of ASYNC cache strategy has changed in JBoss EAP 7. In JBoss EAP 6, ASYNC cache reads were lock free. Although they would never block, the were prone to dirty reads of stale data, for example on failover. This is because it would allow subsequent requests for the same user to start before the request completed. This permissiveness is not acceptable when using distributed mode, since cluster topology changes can affect session affinity and easily result in stale data. In JBoss EAP 7, ASYNC cache reads require locks. Since they now block new requests from the same user until the replication finishes, dirty reads are prevented. 4.7.3. Configuring Custom Stateful Session Bean Cache for Passivation Be aware of the following restrictions when configuring a custom stateful session bean (SFSB) cache for passivation in JBoss EAP 7.1 and later. The idle-timeout attribute, which is configured in the infinispan passivation-store of the ejb3 subsystem, is deprecated in JBoss EAP 7.1 and later. 
JBoss EAP 6.4 supported eager passivation, passivating according to the idle-timeout value. JBoss EAP 7.1 and later support lazy passivation, passivating when the max-size threshold is reached. In JBoss EAP 7.1 and later, the cluster name used by the Jakarta Enterprise Beans client is determined by the actual cluster name of the channel, as configured in the jgroups subsystem. JBoss EAP 7.1 and later still allow you to set the max-size attribute to control the passivation threshold. You should not configure eviction or expiration in your Jakarta Enterprise Beans cache configuration. You should configure eviction by using the max-size attribute of the passivation-store in the ejb3 subsystem. You should configure expiration by using the @StatefulTimeout annotation in the SFSB Java source code or by specifying a stateful-timeout value in the ejb-jar.xml file. 4.7.4. Infinispan Cache Container Transport Changes A change in behavior between JBoss EAP 7.0 and later versions requires that any updates to the cache container transport protocol to be done in batch mode or using a special header. This change in behavior also impacts any tools that are used to manage the JBoss EAP server. The following is an example of the management CLI commands used to configure the cache container transport protocol in JBoss EAP 7.0. The following is an example of the management CLI commands needed to perform the same configuration in JBoss EAP 7.1. Note that the commands are executed in batch mode. If you prefer not to use batch mode, you can instead specify the operation header allow-resource-service-restart=true when defining the transport. Be aware that this restarts the service so that the operations can take effect, and some services might stop working until the service is restarted. If you use scripts to update the cache container transport protocol, be sure to review them and add batch mode. 4.8. Jakarta Enterprise Beans Server Configuration Changes There is no migrate operation for the ejb3 subsystem, so if you use the management CLI migrate operations to upgrade your other existing JBoss EAP 6.4 configurations, be aware that the ejb3 subsystem configuration is not migrated. Because the configuration of the ejb3 subsystem is slightly different in JBoss EAP 7 than in JBoss EAP 6.4, you might see exceptions in the server log when you deploy your enterprise bean applications. Important If you use the JBoss Server Migration Tool to update your server configuration, the ejb3 subsystem should be configured correctly and you should not see any issues when you deploy your Jakarta Enterprise Beans applications. For information about how to configure and run the tool, see Using the JBoss Server Migration Tool . 4.8.1. DuplicateServiceException The following DuplicateServiceException is caused by caching changes in JBoss EAP 7. DuplicateServiceException in Server Log You must reconfigure the cache to resolve this error. Follow the instructions to Start the Server and the Management CLI . Issue the following commands to reconfigure caching in the ejb3 subsystem. 4.8.2. Jakarta Enterprise Beans subsystem server configuration changes Before JBoss EAP 7.4, the connector-ref attribute of the remote element in the ejb3 subsystem was used to specify a single remoting connector. External Jakarta Enterprise Beans clients would then use the specified remoting connector to connect to the server. JBoss EAP 7.4 replaces the connector-ref attribute with the connectors attribute. 
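The following is a minimal sketch of that change in the ejb3 subsystem, assuming the default http-remoting-connector provided by the remoting subsystem; the thread pool name is illustrative.
<!-- JBoss EAP 7.3 and earlier: a single connector reference -->
<remote connector-ref="http-remoting-connector" thread-pool-name="default"/>
<!-- JBoss EAP 7.4: a space-separated list of connectors -->
<remote connectors="http-remoting-connector" thread-pool-name="default"/>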
The connectors attribute takes a list of connectors from the remoting subsystem, so that external Jakarta Enterprise Beans clients can use them to connect to the server. Additional resources Jakarta Enterprise Beans client remoting interoperability 4.9. Messaging Server Configuration Changes In JBoss EAP 7, ActiveMQ Artemis replaces HornetQ as the Jakarta Messaging support provider. This section describes how to migrate the configuration and related messaging data. 4.9.1. Messaging Subsystem Server Configuration Changes The org.jboss.as.messaging module extension, located in EAP_HOME /modules/system/layers/base/ , has been replaced by the org.wildfly.extension.messaging-activemq extension module. The urn:jboss:domain:messaging:3.0 subsystem configuration namespace has been replaced by the urn:jboss:domain:messaging-activemq:4.0 namespace. Management Model In most cases, an effort was made to keep the element and attribute names as similar as possible to those used in previous releases. The following table lists some of the changes. Table 4.3. Mapping Messaging Attributes HornetQ Name ActiveMQ Name hornetq-server server hornetq-serverType serverType connectors connector discovery-group-name discovery-group The management operations invoked on the new messaging-activemq subsystem have changed from /subsystem=messaging/hornetq-server= to /subsystem=messaging-activemq/server= . You can migrate an existing JBoss EAP 6 messaging subsystem configuration to the messaging-activemq subsystem on a JBoss EAP 7 server by invoking its migrate operation. Before you execute the migrate operation, you can invoke the describe-migration operation to review the list of management operations that will be performed to migrate from the existing JBoss EAP 6 messaging subsystem configuration to the messaging-activemq subsystem on the JBoss EAP 7 server. The migrate and describe-migration operations also display a list of migration-warnings for resources or attributes that cannot be migrated automatically. Messaging Subsystem Migration and Forward Compatibility The describe-migration and migrate operations for the messaging subsystem provide an additional configuration argument. If you want to configure messaging to allow legacy JBoss EAP 6 clients to connect to the JBoss EAP 7 server, you can add the boolean add-legacy-entries argument to the describe-migration or migrate operation as follows. If the boolean argument add-legacy-entries is set to true , the messaging-activemq subsystem creates the legacy-connection-factory resource and adds legacy-entries to the jms-queue and jms-topic resources. If the boolean argument add-legacy-entries is set to false , no legacy resources are created in the messaging-activemq subsystem and legacy messaging clients will not be able to communicate with the JBoss EAP 7 servers. This is the default value. For more information about forward and backward compatibility, see Backward and Forward Compatibility in Configuring Messaging for JBoss EAP. For more information about the management CLI migrate and describe-migration operations, see Management CLI Migration Operation . Change in Behavior of forward-when-no-consumers Attribute The behavior of the forward-when-no-consumers attribute has changed in JBoss EAP 7. In JBoss EAP 6, when forward-when-no-consumers was set to false and there were no consumers in a cluster, messages were redistributed to all nodes in a cluster. This behavior has changed in JBoss EAP 7.
When forward-when-no-consumers is set to false and there are no consumers in a cluster, messages are not redistributed. Instead, they are kept on the original node to which they were sent. Change in Default Cluster Load Balancing Policy The default cluster load balancing policy has changed in JBoss EAP 7. In JBoss EAP 6, the default cluster load balancing policy was similar to STRICT , which is like setting the legacy forward-when-no-consumers parameter to true . In JBoss EAP 7, the default is now ON_DEMAND , which is like setting the legacy forward-when-no-consumers parameter to false . For more information about these settings, see Cluster Connection Attributes in Configuring Messaging for JBoss EAP. Messaging Subsystem XML Configuration The XML configuration has changed significantly with the new messaging-activemq subsystem, and now provides an XML scheme more consistent with other JBoss EAP subsystems. It is strongly advised that you do not attempt to modify the JBoss EAP messaging subsystem XML configuration to conform to the new messaging-activemq subsystem. Instead, invoke the legacy subsystem migrate operation. This operation will write the XML configuration of the new messaging-activemq subsystem as a part of its execution. 4.9.2. Migrate Messaging Data You can use one of the following approaches to migrate messaging data from a release to the current release of JBoss EAP. For file-based messaging systems, you can migrate messaging data to JBoss EAP 7.4 from JBoss EAP 6.4 and JBoss EAP 7.x releases using the export and import method . With this method you export the messaging data from the release and import it using the management CLI import-journal operation. Be aware that you can use this approach for file-based messaging systems only. You can migrate messaging data from JBoss EAP 6.4 to JBoss EAP 7.4 by configuring a Jakarta Messaging bridge . You can use this approach for both file-based and JDBC messaging systems. Due to the change from HornetQ to ActiveMQ Artemis as the Jakarta Messaging support provider, both the format and the location of the messaging data changed in JBoss EAP 7.0 and later. See Mapping Messaging Folder Names for details of the changes to the messaging data folder names and locations between the 6.4 and 7.x releases. 4.9.2.1. Migrate Messaging Data Using Export and Import Using this approach, you export the messaging data from a release to an XML file, and then import that file using the import-journal operation. Export the messaging data to an XML file. Export messaging data from JBoss EAP 6.4. Export messaging data from JBoss EAP 7.x. Import the XML formatted messaging data. Important You cannot use the export and import method to move messaging data between systems that use a JDBC-based journal for storage. Export Messaging Data from JBoss EAP 6.4 Due to the change from HornetQ to ActiveMQ Artemis as the messaging support provider, both the format and the location of the messaging data changed in JBoss EAP 7.0 and later. To export messaging data from JBoss EAP 6.4, you must use the HornetQ exporter utility. The HornetQ exporter utility generates and exports the messaging data from JBoss EAP 6.4 to an XML file. This command requires that you specify the paths to the required HornetQ JARs that shipped with JBoss EAP 6.4, pass the paths to messagingbindings/ , messagingjournal/ , messagingpaging/ , and messaginglargemessages/ folders from the release as arguments, and specify an output file in which to write the exported XML data. 
The following is the syntax required by the HornetQ exporter utility. USD java -jar -mp MODULE_PATH org.hornetq.exporter MESSAGING_BINDINGS_DIRECTORY MESSAGING_JOURNAL_DIRECTORY MESSAGING_PAGING_DIRECTORY MESSAGING_LARGE_MESSAGES_DIRECTORY > OUTPUT_DATA .xml Create a custom module to ensure the correct versions of the HornetQ JARs, including any JARs installed with patches or upgrades, are loaded and made available to the exporter utility. Using your favorite editor, create a new module.xml file in the EAP6_HOME /modules/org/hornetq/exporter/main/ directory and copy the following content: <?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.1" name="org.hornetq.exporter"> <main-class name="org.hornetq.jms.persistence.impl.journal.XmlDataExporter"/> <properties> <property name="jboss.api" value="deprecated"/> </properties> <dependencies> <module name="org.hornetq"/> </dependencies> </module> Note The custom module is created in the modules/ directory, not the modules/system/layers/base/ directory. Follow the steps below to export the data. Stop the JBoss EAP 6.4 server. Create the custom module as described above. Run the following command to export the data. Make sure there are no errors or warning messages in the log at the completion of the command. Use tooling available for your operating system to validate the XML in the generated output file. Export Messaging Data from JBoss EAP 7.x Follow these steps to export messaging data from JBoss EAP 7.x. Open a terminal, navigate to the JBoss EAP 7.x install directory, and start the server in admin-only mode. Open a new terminal, navigate to the JBoss EAP 7.x install directory, and connect to the management CLI. Use the following management CLI command to export the messaging journal data. Make sure there are no errors or warning messages in the log at the completion of the command. Use tooling available for your operating system to validate the XML in the generated output file. Import the XML Formatted Messaging Data You then import the XML file into JBoss EAP 7.0 or later by using the import-journal operation as follows. Important If your target server has already performed some messaging tasks, be sure to back up your messaging folders before you begin the import-journal operation to prevent data loss in the event of an import failure. See Backing Up Messaging Folder Data for more information. If you are migrating your JBoss EAP 6.4 server to JBoss EAP 7.4, make sure you have completed the migration of the server configuration before you begin by using the management CLI migrate operation or by running the JBoss Server Migration Tool. For information about how to configure and run the tool, see Using the JBoss Server Migration Tool . Start the JBoss EAP 7.x server in normal mode with no Jakarta Messaging clients connected. Important It is important that you start the server with no Jakarta Messaging clients connected. This is because the import-journal operation behaves like a Jakarta Messaging producer. Messages are immediately available when the operation is in progress. If this operation fails in the middle of the import and Jakarta Messaging clients are connected, there is no way to recover because Jakarta Messaging clients might have already consumed some of the messages. Open a new terminal, navigate to the JBoss EAP 7.x install directory, and connect to the management CLI. Use the following management CLI command to import the messaging data. 
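A minimal sketch of that import operation, assuming the default messaging server name and a hypothetical path to the previously exported file:
/subsystem=messaging-activemq/server=default:import-journal(file=/path/to/OUTPUT_DATA.xml)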
Important Do not run this command more than one time as doing so will result in duplicate messages! Warning If you are using JBoss EAP 7.0, you must apply Red Hat JBoss Enterprise Application Platform 7.0 Update 05 or a newer cumulative patch to your JBoss EAP installation in order to avoid a known issue when reading large messages. For more information, see JBEAP-4407 - Consumer crashes with IndexOutOfBoundsException when reading large messages from imported journal . This issue does not affect JBoss EAP 7.1 and later. Recovering from an Import Messaging Data Failure If the import-journal operation fails, you can attempt to recover by using the following steps. Shut down the JBoss EAP 7.x server. Delete all of the messaging journal folders. See Backing Up Messaging Folder Data for the management CLI commands to determine the correct directory location for the messaging journal folders. If you backed up the target server messaging data prior to the import, copy the messaging folders from the backup location to the messaging journal directory determined in the prior step. Repeat the steps to import the XML formatted messaging data . 4.9.2.2. Migrate Messaging Data Using a Messaging Bridge Using this approach, you configure and deploy a messaging bridge to the JBoss EAP 7.x server. This bridge moves messages from the JBoss EAP 6.4 HornetQ queue to the JBoss EAP 7.x ActiveMQ Artemis queue. A Jakarta Messaging bridge consumes messages from a source Jakarta Messaging queue or topic and sends them to a target Jakarta Messaging queue or topic, which is typically on a different server. It can be used to bridge messages between any messaging servers, as long as they are JMS 1.1 compliant. The source and destination Jakarta Messaging resources are looked up using Java Naming and Directory Interface, and the client classes for the Java Naming and Directory Interface lookup must be bundled in a module. The module name is then declared in the Jakarta Messaging bridge configuration. This section describes how to configure the servers and deploy a messaging bridge to move the messaging data from JBoss EAP 6.4 to JBoss EAP 7.x. Configure the source JBoss EAP 6.4 server. Configure the target JBoss EAP 7.x server. Migrate the messaging data. Configure the Source JBoss EAP 6.4 Server Stop the JBoss EAP 6.4 server. Back up the HornetQ journal and configuration files. By default, the HornetQ journal is located in the EAP6_HOME /standalone/data/ directory. See Mapping Messaging Folder Names for default messaging folder locations for each release. Make sure that the InQueue JMS queue containing the JMS messages is defined on the JBoss EAP 6.4 server. Make sure that messaging subsystem configuration contains an entry for the RemoteConnectionFactory similar to the following. <connection-factory name="RemoteConnectionFactory"> <entries> <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/> </entries> ... </connection-factory> If it does not contain the entry, create one using the following management CLI command: Configure the Target JBoss EAP 7.x Server The Jakarta Messaging bridge configuration needs the org.hornetq module to connect to the HornetQ server in the release. This module and its direct dependencies are not present in JBoss EAP 7.x, so you must copy the following modules from the release. Copy the org.hornetq module into the JBoss EAP 7.x EAP_HOME /modules/org/ directory. 
If you did not apply patches to this module, copy this folder from the JBoss EAP 6.4 server: EAP6_HOME /modules/system/layers/base/org/hornetq/ If you did apply patches to this module, copy this folder from the JBoss EAP 6.4 server: EAP6_HOME /modules/system/layers/base/.overlays/layer-base-jboss-eap-6.4.x.CP/org/hornetq/ Remove the <resource-root> for the HornetQ lib path from the JBoss EAP 7.x EAP_HOME /modules/org/hornetq/main/module.xml file. If you did not apply patches to the JBoss EAP 6.4 org.hornetq module, remove the following line from the file: <resource-root path="lib"/> If you did apply patches to the JBoss EAP 6.4 org.hornetq module, remove the following lines from the file: <resource-root path="lib"/> <resource-root path="../../../../../org/hornetq/main/lib"/> Warning Failure to remove the HornetQ lib path resource-root will cause the bridge to fail with the following error in the log file. Copy the org.jboss.netty module into the JBoss EAP 7.x EAP_HOME /modules/org/jboss/ directory. If you did not apply patches to this module, copy this folder from the JBoss EAP 6.4 server: EAP6_HOME /modules/system/layers/base/org/jboss/netty/ If you did apply patches to this module, copy this folder from the JBoss EAP 6.4 server: EAP6_HOME /modules/system/layers/base/.overlays/layer-base-jboss-eap-6.4.x.CP/org/jboss/netty Create the Jakarta Messaging queue to contain the messages received from JBoss EAP 6.4 server. The following is an example of a management CLI command that creates the MigratedMessagesQueue Jakarta Messaging queue to receive the message. This creates the following jms-queue configuration for the default server in the messaging-activemq subsystem of the JBoss EAP 7.x server. <jms-queue name="MigratedMessagesQueue" entries="jms/queue/MigratedMessagesQueue java:jboss/exported/jms/queue/MigratedMessagesQueue"/> Make sure that messaging-activemq subsystem default server contains a configuration for the InVmConnectionFactory connection-factory similar to the following: <connection-factory name="InVmConnectionFactory" factory-type="XA_GENERIC" entries="java:/ConnectionFactory" connectors="in-vm"/> If it does not contain the entry, create one using the following management CLI command: Create and deploy a Jakarta Messaging bridge that reads messages from the InQueue JMS queue configured on the JBoss EAP 6.4 server and transfers them to the MigratedMessagesQueue configured on the JBoss EAP 7.x server. This creates the following jms-bridge configuration in the messaging-activemq subsystem of the JBoss EAP 7.x server. <jms-bridge name="myBridge" add-messageID-in-header="true" max-batch-time="100" max-batch-size="10" max-retries="-1" failure-retry-interval="1000" quality-of-service="AT_MOST_ONCE" module="org.hornetq"> <source destination="jms/queue/InQueue" connection-factory="jms/RemoteConnectionFactory"> <source-context> <property name="java.naming.factory.initial" value="org.wildfly.naming.client.WildFlyInitialContextFactory"/> <property name="java.naming.provider.url" value="remote://127.0.0.1:4447"/> </source-context> </source> <target destination="jms/queue/MigratedMessagesQueue" connection-factory="java:/ConnectionFactory"/> </jms-bridge> If security is configured for JBoss EAP 6.4, you must also configure the messaging bridge configuration <source> element to include a source-context that specifies the correct user name and password to use for the Java Naming and Directory Interface lookup when creating the connection. 
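For example, the <source> element of the bridge might include Java Naming and Directory Interface security properties similar to the following sketch; the user name and password values are placeholders for an application user defined on the JBoss EAP 6.4 server.
<source-context>
    <property name="java.naming.factory.initial" value="org.wildfly.naming.client.WildFlyInitialContextFactory"/>
    <property name="java.naming.provider.url" value="remote://127.0.0.1:4447"/>
    <property name="java.naming.security.principal" value="quickuser"/>
    <property name="java.naming.security.credentials" value="quick-123"/>
</source-context>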
Migrate the Messaging Data Verify that the information you provided for the following configurations is correct. Any queue and topic names. The java.naming.provider.url for Java Naming and Directory Interface lookup. Make sure that you have deployed the target Jakarta Messaging destination to the JBoss EAP 7.x server. Start both the JBoss EAP 6.4 and JBoss EAP 7.x servers. 4.9.2.3. Mapping Messaging Folder Names The following table shows the messaging directory names used in the release and the corresponding names used in the current release of JBoss EAP. The directories are relative to the jboss.server.data.dir directory, which defaults to EAP_HOME /standalone/data/ if it is not specified. JBoss EAP 6.4 Directory Name JBoss EAP 7.x Directory Name messagingbindings/ activemq/bindings/ messagingjournal/ activemq/journal/ messaginglargemessages/ activemq/largemessages/ messagingpaging/ activemq/paging/ Note The messaginglargemessages/ and messagingpaging/ directories might not be present if there are no large messages or if paging is disabled. 4.9.2.4. Backing Up Messaging Folder Data If your target server has already processed messages, it is a good idea to back up the target message folders to a backup location before you begin. The default location of the messaging folders is EAP_HOME /standalone/data/activemq/ ; however it is configurable. If you are not sure of the location of your messaging data, you can use the following management CLI commands to find the location of the messaging folders. Once you know the location of the folders, copy each folder to a safe backup location. 4.9.3. Migrate Messaging Destinations In JBoss EAP 6, messaging destination queues were configured in the <jms-destinations> element under the <hornetq-server> element in the messaging subsystem. <hornetq-server> ... <jms-destinations> <jms-queue name="testQueue"> <entry name="queue/test"/> <entry name="java:jboss/exported/jms/queue/test"/> </jms-queue> </jms-destinations> ... </hornetq-server> In JBoss EAP 7, the Jakarta Messaging destination queue is configured in the default <server> element of the messaging-activemq subsystem. <server name="default"> ... <jms-queue name="testQueue" entries="queue/test java:jboss/exported/jms/queue/test"/> ... </server> 4.9.4. Migrate Messaging Interceptors Messaging interceptors have changed significantly in JBoss EAP 7 with the replacement of HornetQ with ActiveMQ Artemis as the Jakarta Messaging provider. The HornetQ messaging subsystem included in the release of JBoss EAP required that you install the HornetQ interceptors by adding them to a JAR and then modifying the HornetQ module.xml file. The messaging-activemq subsystem included in JBoss EAP 7 does not require modification of a module.xml file. User interceptor classes, which now implement the Apache ActiveMQ Artemis Interceptor interface, can now be loaded from any server module. You specify the module from which the interceptor should be loaded in the messaging-activemq subsystem of the server configuration file. Example: Interceptor Configuration <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... 
<incoming-interceptors> <class name="com.mycompany.incoming.myInterceptor" module="com.mycompany" /> <class name="com.othercompany.incoming.myOtherInterceptor" module="com.othercompany" /> </incoming-interceptors> <outgoing-interceptors> <class name="com.mycompany.outgoing.myInterceptor" module="com.mycompany" /> <class name="com.othercompany.outgoing.myOtherInterceptor" module="com.othercompany" /> </outgoing-interceptors> </server> </subsystem> 4.9.5. Replace Netty Servlet Configuration In JBoss EAP 6, you could configure a servlet engine to work with the Netty Servlet transport. Because ActiveMQ Artemis replaces HornetQ as the built-in messaging provider in JBoss EAP 7, this configuration is no longer available. You must replace the servlet configuration to use the new built-in messaging HTTP connectors and HTTP acceptors instead. 4.9.6. Configuring a Generic Jakarta Messaging Resource Adapter The way you configure a generic Jakarta Messaging resource adapter for use with a third-party Jakarta Messaging provider has changed in JBoss EAP 7. For more information, see Deploying a Generic Jakarta Messaging Resource Adapter in Configuring Messaging for JBoss EAP. 4.9.7. Messaging Configuration Changes In JBoss EAP 7.0, if you configured the replication-master policy without specifying the check-for-live-server attribute, its default value was false . This has changed in JBoss EAP 7.1 and later. The default value for the check-for-live-server attribute is now true . The following is an example of a management CLI command that configures the replication-master policy without specifying the check-for-live-server attribute. When you read the resource using the management CLI, note that the check-for-live-server attribute value is set to true . 4.9.8. Changes in JMS and Jakarta Messaging Serialization Behavior Between Releases The serialVersionUID of javax.jms.JMSException changed between JMS 1.1 and JMS 2.0.0. This means that if an instance of a JMSException , or any of its subclasses, is serialized using JMS 1.1, it cannot be deserialized using JMS 2.0.0. The reverse is also true. If an instance of JMSException is serialized using JMS 2.0.0, it cannot be deserialized using JMS 1.1. In both of these cases, it throws an exception similar to the following: This issue is fixed in the Jakarta Messaging 2.0.1 maintenance release. Note The JMS 2.0.1 specification is compatible with the Jakarta Messaging 2.0.3 specification. The following table details implementations for each JBoss EAP release. Table 4.4. Implementations for Each JBoss EAP Release JBoss EAP Version Implementation Version 6.4 HornetQ JMS 1.1 7.0 Apache ActiveMQ Artemis JMS 2.0.0 7.1 and later Apache ActiveMQ Artemis Jakarta Messaging 2.0.3 or later Be aware that the serialVersionUID incompatibility can result in a migration issue in the following situations: If you send a message that contains a JMSException using a JBoss EAP 6.4 client, migrate your messaging data to JBoss EAP 7.0, and then attempt to deserialize that message using a JBoss EAP 7.0 client, the deserialization will fail and it will throw an exception. This is because the serialVersionUID in JMS 1.1 is not compatible with the one in JMS 2.0.0. If you send a message that contains a JMSException using a JBoss EAP 7.0 client, migrate your messaging data to JBoss EAP 7.1 or later, and then attempt to deserialize that message using a JBoss EAP 7.1 or later client, the deserialization will fail and it will throw an exception. 
This is because the serialVersionUID in JMS 2.0.0 is not compatible with the one in Jakarta Messaging 2.0.3 or later. Note that if you send a message that contains a JMSException using a JBoss EAP 6.4 client, migrate your messaging data to JBoss EAP 7.1 or later, and then attempt to deserialize that message using a JBoss EAP 7.1 or later client, the deserialization will succeed because the serialVersionUID in JMS 1.1 is compatible with the one in Jakarta Messaging 2.0.3 or later. Important Red Hat recommends that you do the following before you migrate your messaging data: Be sure to consume all JMS 1.1 messages that contain JMSExceptions before migrating messaging data from JBoss EAP 6.4 to JBoss EAP 7.0. Be sure to consume all Jakarta Messaging 2.0.3 messages that contain Jakarta Messaging Exceptions before migrating messaging data from JBoss EAP 7.0 to JBoss EAP 7.1 or later. 4.10. JMX Management and Jakarta Management Changes The HornetQ component in JBoss EAP 6 provided its own JMX management; however, it was not recommended and is now deprecated and no longer supported. If you relied on this feature in JBoss EAP 6, you must migrate your management tooling to use either the JBoss EAP management CLI or the Jakarta Management management provided with JBoss EAP 7. You must also upgrade your client libraries to use the jboss-client.jar that ships with JBoss EAP 7. The following is an example of HornetQ JMX management code that was used in JBoss EAP 6. JMXConnector connector = null; try { HashMap environment = new HashMap(); String[] credentials = new String[]{"admin", "Password123!"}; environment.put(JMXConnector.CREDENTIALS, credentials); // HornetQ used the protocol "remoting-jmx" and port "9999" JMXServiceURL beanServerUrl = new JMXServiceURL("service:jmx:remoting-jmx://127.0.0.1:9990"); connector = JMXConnectorFactory.connect(beanServerUrl, environment); MBeanServerConnection mbeanServer = connector.getMBeanServerConnection(); // The JMX object name pointed to the HornetQ JMX management ObjectName objectName = new ObjectName("org.hornetq:type=Server,module=JMS"); // The invoked method name was "listConnectionIDs" String[] connections = (String[]) mbeanServer.invoke(objectName, "listConnectionIDs", new Object[]{}, new String[]{}); for (String connection : connections) { System.out.println(connection); } } finally { if (connector != null) { connector.close(); } } The following is an example of the equivalent code needed for ActiveMQ Artemis in JBoss EAP 7. 
JMXConnector connector = null; try { HashMap environment = new HashMap(); String[] credentials = new String[]{"admin", "Password123!"}; environment.put(JMXConnector.CREDENTIALS, credentials); // ActiveMQ Artemis uses the protocol "remote+http" and port "9990" JMXServiceURL beanServerUrl = new JMXServiceURL("service:jmx:remote+http://127.0.0.1:9990"); connector = JMXConnectorFactory.connect(beanServerUrl, environment); MBeanServerConnection mbeanServer = connector.getMBeanServerConnection(); // The Jakarta Management object name points to the new Jakarta Management in the `messaging-activemq` subsystem ObjectName objectName = new ObjectName("jboss.as:subsystem=messaging-activemq,server=default"); // The invoked method name is now "listConnectionIds" String[] connections = (String[]) mbeanServer.invoke(objectName, "listConnectionIds", new Object[]{}, new String[]{}); for (String connection : connections) { System.out.println(connection); } } finally { if (connector != null) { connector.close(); } } Notice that the method names and parameters have changed in the new implementation. You can find the new method names in the JConsole by following these steps. Connect to the JConsole using the following command. Connect to JBoss EAP local process. Note that it should start with "jboss-modules.jar". In the MBeans tab, choose jboss.as messaging-activemq default Operations to display the list of method names and attributes. 4.11. ORB Server Configuration Changes The JacORB implementation has been replaced with a downstream branch of the OpenJDK ORB in JBoss EAP 7. The org.jboss.as.jacorb extension module, located in EAP_HOME /modules/system/layers/base/ , has been replaced by the org.wildfly.iiop-openjdk extension module. The urn:jboss:domain:jacorb:1.4 subsystem configuration namespace in the server configuration file has been replaced by the urn:jboss:domain:iiop-openjdk:2.1 namespace. The following is an example of the default jacorb system configuration in JBoss EAP 6. <subsystem xmlns="urn:jboss:domain:jacorb:1.4"> <orb socket-binding="jacorb" ssl-socket-binding="jacorb-ssl"> <initializers security="identity" transactions="spec"/> </orb> </subsystem> The following is an example of the default iiop-openjdk subsystem configuration in JBoss EAP 7. <subsystem xmlns="urn:jboss:domain:iiop-openjdk:2.1"> <orb socket-binding="jacorb" ssl-socket-binding="jacorb-ssl" /> <initializers security="identity" transactions="spec" /> </subsystem> The new iiop-openjdk subsystem configuration accepts only a subset of the legacy elements and attributes. 
The following is an example of a jacorb subsystem configuration in the release of JBoss EAP that contains all valid elements and attributes: <subsystem xmlns="urn:jboss:domain:jacorb:1.4"> <orb name="JBoss" print-version="off" use-imr="off" use-bom="off" cache-typecodes="off" cache-poa-names="off" giop-minor-version="2" socket-binding="jacorb" ssl-socket-binding="jacorb-ssl"> <connection retries="5" retry-interval="500" client-timeout="0" server-timeout="0" max-server-connections="500" max-managed-buf-size="24" outbuf-size="2048" outbuf-cache-timeout="-1"/> <initializers security="off" transactions="spec"/> </orb> <poa monitoring="off" queue-wait="on" queue-min="10" queue-max="100"> <request-processors pool-size="10" max-threads="32"/> </poa> <naming root-context="JBoss/Naming/root" export-corbaloc="on"/> <interop sun="on" comet="off" iona="off" chunk-custom-rmi-valuetypes="on" lax-boolean-encoding="off" indirection-encoding-disable="off" strict-check-on-tc-creation="off"/> <security support-ssl="off" add-component-via-interceptor="on" client-supports="MutualAuth" client-requires="None" server-supports="MutualAuth" server-requires="None"/> <properties> <property name="some_property" value="some_value"/> </properties> </subsystem> The following element attributes are no longer supported and must be removed. Table 4.5. Attributes to Remove Element Unsupported Attributes <orb> client-timeout max-managed-buf-size max-server-connections outbuf-cache-timeout outbuf-size connection retries retry-interval name server-timeout <poa> queue-min queue-max pool-size max-threads The following on/off attributes are no longer supported and will not be migrated when you run the management CLI migrate operation. If they are set to on , you will get a migration warning. Other on/off attributes that are not mentioned in this table, for example <security support-ssl="on|off"> , are still supported and will be migrated successfully. The only difference is that their values will be changed from on/off to true/false . Table 4.6. Attributes to Turn Off or Remove Element Attributes to Set to Off <orb> cache-poa-names cache-typecodes print-version use-bom use-imr <interop> (all except sun ) comet iona chunk-custom-rmi-valuetypes indirection-encoding-disable lax-boolean-encoding strict-check-on-tc-creation <poa> monitoring queue-wait 4.12. Migrate the Threads Subsystem Configuration The JBoss EAP 6 server configuration included a threads subsystem that was used to manage thread pools across the various subsystems in the server. The threads subsystem is no longer available in JBoss EAP 7. Instead, each subsystem is responsible for managing its own thread pools. For information about how to configure thread pools for the infinispan subsystem, see Configure Infinispan Thread Pools in the JBoss EAP Configuration Guide . For information about how to configure thread pools for the jgroups subsystem, see Configure JGroups Thread Pools in the JBoss EAP Configuration Guide . In JBoss EAP 6, you configured thread pools for connectors and listeners for the web subsystem by referencing an executor that was defined in the threads subsystem. In JBoss EAP 7, you now configure thread pools for the undertow subsystem by referencing a worker that is defined in the io subsystem. For more information, see Configuring the IO Subsystem in the JBoss EAP Configuration Guide . 
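The following sketch shows the general shape of that configuration; the worker name, thread count, and namespace versions are illustrative and depend on your JBoss EAP 7 release.
<subsystem xmlns="urn:jboss:domain:io:3.0">
    <worker name="default" task-max-threads="64"/>
    <buffer-pool name="default"/>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:undertow:10.0" ...>
    <server name="default-server">
        <http-listener name="default" socket-binding="http" worker="default"/>
        ...
    </server>
</subsystem>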
For information about changes to thread pool configuration in the remoting subsystem, see Migrate the Remoting Subsystem Configuration in this guide, and Configuring the Endpoint in the JBoss EAP Configuration Guide . 4.13. Migrate the Remoting Subsystem Configuration In JBoss EAP 6, you configured the thread pool for the remoting subsystem by setting various worker-* attributes. The worker thread pool is no longer configured in the remoting subsystem in JBoss EAP 7, and if you attempt to modify the existing configuration, you will see the following message. In JBoss EAP 7, the worker thread pool is replaced by an endpoint configuration that references a worker defined in the io subsystem. For information about how to configure the endpoint, see Configuring the Endpoint in the JBoss EAP Configuration Guide . 4.14. WebSocket Server Configuration Changes To use WebSockets in JBoss EAP 6, you had to enable the non-blocking NIO2 connector protocol for the http connector in the web subsystem of the JBoss EAP server configuration file using a command similar to the following. To use WebSockets in an application, you also had to create an <enable-websockets> element in the application WEB-INF/jboss-web.xml file and set it to true . In JBoss EAP 7, you no longer need to configure the server for default WebSocket support or configure the application to use it. WebSockets are a requirement of the Jakarta EE 8 specification and the required protocols are configured by default. More complex WebSocket configuration is done in the servlet-container of the undertow subsystem of the JBoss EAP server configuration file. You can view the available settings using the following command. For more information about WebSocket development, see Creating WebSocket Applications in the JBoss EAP Development Guide . WebSocket code examples can also be found in the quickstarts that ship with JBoss EAP. 4.15. Single Sign-on Server Changes The infinispan subsystem still provides distributed caching support for HA services in the form of Infinispan caches in JBoss EAP 7; however, the caching and distribution of authentication information is handled differently than in previous releases. In JBoss EAP 6, if single sign-on (SSO) was not provided an Infinispan cache, the cache was not distributed. In JBoss EAP 7, SSO is distributed automatically when you select the HA profile. When running the HA profile, each host has its own Infinispan cache, which is based on the default cache of the web cache container. This cache stores the relevant session and SSO cookie information for the host. JBoss EAP handles propagation of individual cache information to all hosts. There is no way to specifically assign an Infinispan cache to SSO in JBoss EAP 7. In JBoss EAP 7, SSO is configured in the undertow subsystem of the server configuration file. There are no application code changes required for SSO when migrating to JBoss EAP 7. 4.16. DataSource Configuration Changes 4.16.1. JDBC Datasource Driver Name When you configured a datasource in the previous release of JBoss EAP, the value specified for the driver name depended on the number of classes listed in the META-INF/services/java.sql.Driver file contained in the JDBC driver JAR. Driver Containing a Single Class If the META-INF/services/java.sql.Driver file specified only one class, the driver name value was simply the name of the JDBC driver JAR. This has not changed in JBoss EAP 7.
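As a hedged illustration, for a hypothetical driver JAR named postgresql-9.4.jar whose META-INF/services/java.sql.Driver file lists a single class, the datasource simply references the JAR name as the driver name; the datasource name, JNDI name, and connection URL below are placeholders.
/subsystem=datasources/data-source=MyDS:add(jndi-name="java:jboss/datasources/MyDS",driver-name="postgresql-9.4.jar",connection-url="jdbc:postgresql://localhost:5432/mydb")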
Driver Containing Multiple Classes In JBoss EAP 6, if there was more than one class listed in the META-INF/services/java.sql.Driver file, you specified which class was the driver class by appending its name to the JAR name, along with the major and minor version, in the following format. In JBoss EAP 7, this has changed. You now specify the driver name using the following format. Note An underscore has been added between the JAR_NAME and the DRIVER_CLASS_NAME . The MySQL 5.1.31 JDBC driver is an example of a driver that contains two classes. The driver class name is com.mysql.jdbc.Driver . The following examples demonstrate the differences between how you specify the driver name in the previous and current releases of JBoss EAP. Example: JBoss EAP 6 Driver Name Example: JBoss EAP 7 Driver Name 4.17. Security Server Configuration Changes If you migrate to JBoss EAP 7 and plan to run with the Java Security Manager enabled, you should be aware that changes were made in the way policies are defined and that additional configuration changes might be needed. Also be aware that custom security managers are not supported in JBoss EAP 7. For information about Java Security Manager server configuration changes, see Considerations Moving from Previous Versions in How to Configure Server Security for JBoss EAP. 4.17.1. Changes in Legacy Security Behavior between JBoss EAP 7.0 and JBoss EAP 7.1 4.17.1.1. HTTP Status Change for Unreachable LDAP Realms If no LDAP realm was reachable by the server in JBoss EAP 7.0, the security subsystem returned an HTTP status code of "401 Unauthorized". The legacy security subsystem in JBoss EAP 7.1 and later instead returns an HTTP status code of "500 Internal Error" to more accurately describe that an unexpected condition occurred that prevented the server from successfully processing the request. 4.17.1.2. Enabling the LDAP Security Realm to Parse Roles from a DN In JBoss EAP 7.0, the org.jboss.as.domain.management.security.parseGroupNameFromLdapDN system property was used to enable the LDAP security realm to parse for roles from a DN. When this property was set to true , roles were parsed from a DN. Otherwise, a normal LDAP search was used to search for roles. In JBoss EAP 7.1 and later, this system property is deprecated. Instead, you configure this option by setting the newly introduced parse-group-name-from-dn attribute to true in the core service path using the following management CLI command: 4.17.1.3. Changes in Sending the JBoss EAP SSL Certificate to an LDAP Server In JBoss EAP 7.0, when the management interface is configured to use the ldapSSL security realm, mutual authentication between the server and LDAP can fail, resulting in an authentication failure in the management interface. This is because two different LDAP connections are made, each by a different thread, and they do not share the SSL sessions. JBoss EAP 7.1 introduced a new boolean always-send-client-cert management attribute on the LDAP outbound-connection . This option allows configuration of outbound LDAP connections to support LDAP servers that are configured to always require a client certificate. LDAP authentication happens in two steps: It searches for the account. It verifies the credentials. By default, the always-send-client-cert attribute is set to false , meaning the client SSL certificate is sent only with the first request, which searches for the account. When this attribute is set to true , the JBoss EAP LDAP client sends the client certificate to the LDAP server with both the search and verification requests.
You can set this attribute to true using the following management CLI command. This results in the following LDAP outbound connection in the server configuration file. <management> .... <outbound-connections> <ldap name="my-ldap-connection" url="ldap://127.0.0.1:389" search-dn="cn=search,dc=myCompany,dc=com" search-credential="myPass" always-send-client-cert="true"/> </outbound-connections> .... </management> 4.17.2. FIPS Mode Changes If you are running in FIPS mode, be aware that the default behavior has changed between JBoss EAP 7.0 and JBoss EAP 7.1. When using legacy security realms, JBoss EAP 7.1 and later provide the automatic generation of a self-signed certificate for development purposes. This feature, which was not available in JBoss EAP 7.0, is enabled by default. This means that if you are running in FIPS mode, you must configure the server to disable automatic self-signed certificate creation. Otherwise, you might see the following error when you start the server. For information about automatic self-signed certificate creation, see Automatic Self-signed Certificate Creation for Applications in How to Configure Server Security for JBoss EAP. 4.18. Transactions Subsystem Changes Some Transaction Manager configuration attributes that were available in the transactions subsystem in JBoss EAP 6 have changed in JBoss EAP 7. Removed Transactions Subsystem Attributes The following table lists the JBoss EAP 6 attributes that were removed from the transactions subsystem in JBoss EAP 7 and the equivalent replacement attributes. Attribute in JBoss EAP 6 Replacement in JBoss EAP 7 path object-store-path relative-to object-store-relative-to Deprecated Transactions Subsystem Attributes The following attributes that were available in the transactions subsystem in JBoss EAP 6 are deprecated in JBoss EAP 7. The deprecated attributes might be removed in a future release of the product. The following table lists the equivalent replacement attributes. Attribute in JBoss EAP 6 Replacement in JBoss EAP 7 use-hornetq-store use-journal-store hornetq-store-enable-async-io journal-store-enable-async-io enable-statistics statistics-enabled 4.19. Changes to mod_cluster Configuration The configuration for static proxy lists in mod_cluster has changed in JBoss EAP 7. In JBoss EAP 6, you configured the proxy-list attribute, which was a comma-separated list of httpd proxy addresses specified in the format of hostname:port . The proxy-list attribute is deprecated in JBoss EAP 7. It has been replaced by the proxies attribute, which is a list of outbound socket binding names. This change impacts how you define a static proxy list, for example, when disabling advertising for mod_cluster. For information about how to disable advertising for mod_cluster, see Disable Advertising for mod_cluster in the JBoss EAP Configuration Guide . For more information about mod_cluster attributes, see ModCluster Subsystem Attributes in the JBoss EAP Configuration Guide . 4.20. Viewing Configuration Changes JBoss EAP 7 provides the ability to track configuration changes made to the running server. This allows administrators to view a history of configuration changes made by authorized users. In JBoss EAP 7.0, you must use the core-service management CLI command to configure options and to list recent configuration changes. Example: List Configuration Changes in JBoss EAP 7.0 JBoss EAP 7.1 introduced a new core-management subsystem that can be configured to track configuration changes made to the running server. 
This is the preferred method of configuring and viewing configuration changes in JBoss EAP 7.1 and later. Example: List Configuration Changes in JBoss EAP 7.1 and Later For more information about using the new core-management subsystem introduced in JBoss EAP 7.1, see View Configuration Changes in the JBoss EAP Configuration Guide .
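A brief sketch of both approaches, assuming an illustrative history size of 10:
JBoss EAP 7.0:
/core-service=management/service=configuration-changes:add(max-history=10)
/core-service=management/service=configuration-changes:list-changes
JBoss EAP 7.1 and later:
/subsystem=core-management/service=configuration-changes:add(max-history=10)
/subsystem=core-management/service=configuration-changes:list-changes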
[ "cp EAP6_HOME /standalone/configuration/standalone-full.xml EAP7_HOME /standalone/configuration", "bin/standalone.sh -c standalone-full.xml --start-mode=admin-only", "bin/jboss-cli.sh --connect --controller=remote://localhost:9990", "/subsystem= SUBSYSTEM_NAME :describe-migration", "/subsystem=messaging:describe-migration { \"outcome\" => \"success\", \"result\" => { \"migration-warnings\" => [], \"migration-operations\" => [ { \"operation\" => \"add\", \"address\" => [(\"extension\" => \"org.wildfly.extension.messaging-activemq\")], \"module\" => \"org.wildfly.extension.messaging-activemq\" }, { \"operation\" => \"add\", \"address\" => [(\"subsystem\" => \"messaging-activemq\")] }, <!-- *** Entries removed for readability *** --> { \"operation\" => \"remove\", \"address\" => [(\"subsystem\" => \"messaging\")] }, { \"operation\" => \"remove\", \"address\" => [(\"extension\" => \"org.jboss.as.messaging\")] } ] } }", "/subsystem= SUBSYSTEM_NAME :migrate", "/subsystem=messaging:migrate { \"outcome\" => \"success\", \"result\" => {\"migration-warnings\" => []} }", "/subsystem=messaging:migrate { \"outcome\" => \"success\", \"result\" => {\"migration-warnings\" => [ \"WFLYMSG0080: Could not migrate attribute group-address from resource [ (\\\"subsystem\\\" => \\\"messaging-activemq\\\"), (\\\"server\\\" => \\\"default\\\"), (\\\"broadcast-group\\\" => \\\"groupB\\\") ]. Use instead the socket-binding attribute to configure this broadcast-group.\", \"WFLYMSG0080: Could not migrate attribute group-port from resource [ (\\\"subsystem\\\" => \\\"messaging-activemq\\\"), (\\\"server\\\" => \\\"default\\\"), (\\\"broadcast-group\\\" => \\\"groupB\\\") ]. Use instead the socket-binding attribute to configure this broadcast-group.\", \"WFLYMSG0080: Could not migrate attribute local-bind-address from resource [ (\\\"subsystem\\\" => \\\"messaging-activemq\\\"), (\\\"server\\\" => \\\"default\\\"), (\\\"broadcast-group\\\" => \\\"groupA\\\") ]. Use instead the socket-binding attribute to configure this broadcast-group.\", \"WFLYMSG0080: Could not migrate attribute local-bind-port from resource [ (\\\"subsystem\\\" => \\\"messaging-activemq\\\"), (\\\"server\\\" => \\\"default\\\"), (\\\"broadcast-group\\\" => \\\"groupA\\\") ]. Use instead the socket-binding attribute to configure this broadcast-group.\", \"WFLYMSG0080: Could not migrate attribute group-address from resource [ (\\\"subsystem\\\" => \\\"messaging-activemq\\\"), (\\\"server\\\" => \\\"default\\\"), (\\\"broadcast-group\\\" => \\\"groupA\\\") ]. Use instead the socket-binding attribute to configure this broadcast-group.\", \"WFLYMSG0080: Could not migrate attribute group-port from resource [ (\\\"subsystem\\\" => \\\"messaging-activemq\\\"), (\\\"server\\\" => \\\"default\\\"), (\\\"broadcast-group\\\" => \\\"groupA\\\") ]. 
Use instead the socket-binding attribute to configure this broadcast-group.\" ]} }", "/subsystem=jacorb:migrate /subsystem=messaging:migrate /subsystem=web:migrate", "/subsystem=cmp:remove /extension=org.jboss.as.cmp:remove /subsystem=jaxr:remove /extension=org.jboss.as.jaxr:remove /subsystem=threads:remove /extension=org.jboss.as.threads:remove", "<subsystem xmlns=\"urn:jboss:domain:web:2.2\" default-virtual-server=\"default-host\" native=\"false\"> <connector name=\"http\" protocol=\"HTTP/1.1\" scheme=\"http\" socket-binding=\"http\"/> <virtual-server name=\"default-host\" enable-welcome-root=\"true\"> <alias name=\"localhost\"/> <alias name=\"example.com\"/> </virtual-server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:web:2.2\" default-virtual-server=\"default\" native=\"false\"> <virtual-server name=\"default\" enable-welcome-root=\"true\"> <alias name=\"localhost\"/> <rewrite name=\"test\" pattern=\"(.*)/toberewritten/(.*)\" substitution=\"USD1/rewritten/USD2\" flags=\"NC\"/> <rewrite name=\"test2\" pattern=\"(.*)\" substitution=\"-\" flags=\"F\"> <condition name=\"get\" test=\"%{REQUEST_METHOD}\" pattern=\"GET\"/> <condition name=\"andCond\" test=\"%{REQUEST_URI}\" pattern=\".*index.html\" flags=\"NC\"/> </rewrite> </virtual-server> </subsystem>", "/subsystem=web:migrate", "/subsystem=web:migrate { \"outcome\" => \"success\", \"result\" => {\"migration-warnings\" => [ \"WFLYWEB0002: Could not migrate resource { \\\"pattern\\\" => \\\"(.*)\\\", \\\"substitution\\\" => \\\"-\\\", \\\"flags\\\" => \\\"F\\\", \\\"operation\\\" => \\\"add\\\", \\\"address\\\" => [ (\\\"subsystem\\\" => \\\"web\\\"), (\\\"virtual-server\\\" => \\\"default-host\\\"), (\\\"rewrite\\\" => \\\"test2\\\") ] }\", \"WFLYWEB0002: Could not migrate resource { \\\"test\\\" => \\\"%{REQUEST_METHOD}\\\", \\\"pattern\\\" => \\\"GET\\\", \\\"flags\\\" => undefined, \\\"operation\\\" => \\\"add\\\", \\\"address\\\" => [ (\\\"subsystem\\\" => \\\"web\\\"), (\\\"virtual-server\\\" => \\\"default-host\\\"), (\\\"rewrite\\\" => \\\"test2\\\"), (\\\"condition\\\" => \\\"get\\\") ] }\", \"WFLYWEB0002: Could not migrate resource { \\\"test\\\" => \\\"%{REQUEST_URI}\\\", \\\"pattern\\\" => \\\".*index.html\\\", \\\"flags\\\" => \\\"NC\\\", \\\"operation\\\" => \\\"add\\\", \\\"address\\\" => [ (\\\"subsystem\\\" => \\\"web\\\"), (\\\"virtual-server\\\" => \\\"default-host\\\"), (\\\"rewrite\\\" => \\\"test2\\\"), (\\\"condition\\\" => \\\"andCond\\\") ] }\" ]} }", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"http\" socket-binding=\"http\"/> <https-listener name=\"https\" socket-binding=\"https\" 
security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost, example.com\"> <location name=\"/\" handler=\"welcome-content\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "Create the filters /subsystem=undertow/configuration=filter/expression-filter=\"test1\":add(expression=\"path('(.*)/toberewritten/(.*)') -> rewrite('USD1/rewritten/USD2')\") /subsystem=undertow/configuration=filter/expression-filter=\"test2\":add(expression=\"method('GET') and path('.*index.html') -> response-code(403)\") Add the filters to the default server /subsystem=undertow/server=default-server/host=default-host/filter-ref=\"test1\":add /subsystem=undertow/server=default-server/host=default-host/filter-ref=\"test2\":add", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"http\" socket-binding=\"http\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost, example.com\"> <location name=\"/\" handler=\"welcome-content\"/> <filter-ref name=\"test1\"/> <filter-ref name=\"test2\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> <filters> <expression-filter name=\"test1\" expression=\"path('(.*)/toberewritten/(.*)') -> rewrite('USD1/rewritten/USD2')\"/> <expression-filter name=\"test2\" expression=\"method('GET') and path('.*index.html') -> response-code(403)\"/> </filters> </subsystem>", "<access-log pattern=\"%h %l %u %t &quot;%T sec&quot; &quot;%r&quot; %s %b &quot;%{Referer}i&quot; &quot;%{User-agent}i&quot;\"/>", "<access-log pattern=\"%h %l %u %t &quot;%T sec&quot; &quot;%r&quot; %s %b &quot;%{i,Referer}&quot; &quot;%{i,User-Agent}&quot;\"/>", "<valve name=\"jdbc\" module=\"org.jboss.as.web\" class-name=\"org.apache.catalina.valves.JDBCAccessLogValve\"> <param param-name=\"driverName\" param-value=\"com.mysql.jdbc.Driver\" /> <param param-name=\"connectionName\" param-value=\"root\" /> <param param-name=\"connectionPassword\" param-value=\"password\" /> <param param-name=\"connectionURL\" param-value=\"jdbc:mysql://localhost:3306/wildfly?zeroDateTimeBehavior=convertToNull\" /> <param param-name=\"format\" param-value=\"combined\" /> </valve>", "<datasources> <datasource jndi-name=\"java:jboss/datasources/accessLogDS\" pool-name=\"accessLogDS\" enabled=\"true\" use-java-context=\"true\"> <connection-url>jdbc:mysql://localhost:3306/wildfly?zeroDateTimeBehavior=convertToNull</connection-url> <driver>mysql</driver> <security> <user-name>root</user-name> <password>Password1!</password> </security> </datasource> <drivers> <driver name=\"mysql\" module=\"com.mysql\"> <driver-class>com.mysql.jdbc.Driver</driver-class> </driver> </drivers> </datasources>", "<filters> <expression-filter name=\"jdbc-access\" expression=\"jdbc-access-log(datasource='java:jboss/datasources/accessLogDS')\" /> </filters>", "/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=rfc6265-cookie-validation,value=true)", 
"/subsystem=undertow/server=default-server/http-listener=default:read-resource { \"outcome\" => \"success\", \"result\" => { \"allow-encoded-slash\" => false, \"allow-equals-in-cookie-value\" => false, \"allow-unescaped-characters-in-url\" => false, \"always-set-keep-alive\" => true, \"buffer-pipelined-data\" => false, \"buffer-pool\" => \"default\", \"certificate-forwarding\" => false, \"decode-url\" => true, \"disallowed-methods\" => [\"TRACE\"], } }", "/subsystem=undertow/server=default-server/http-listener=default:list-remove(name=disallowed-methods,value=\"TRACE\")", "/subsystem=undertow/server=default-server/http-listener=default:read-resource { \"outcome\" => \"success\", \"result\" => { \"allow-encoded-slash\" => false, \"allow-equals-in-cookie-value\" => false, \"allow-unescaped-characters-in-url\" => false, \"always-set-keep-alive\" => true, \"buffer-pipelined-data\" => false, \"buffer-pool\" => \"default\", \"certificate-forwarding\" => false, \"decode-url\" => true, \"disallowed-methods\" => [], } }", "/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value=ROOT)", "/subsystem=undertow/server=default-server/host=default-host/location=\\/:remove /subsystem=undertow/configuration=handler/file=welcome-content:remove reload", "<subsystem xmlns=\"urn:jboss:domain:undertow:4.0\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <filter-ref name=\"server-header\"/> <filter-ref name=\"x-powered-by-header\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> <filters> <response-header name=\"server-header\" header-name=\"Server\" header-value=\"JBoss-EAP/7\"/> <response-header name=\"x-powered-by-header\" header-name=\"X-Powered-By\" header-value=\"Undertow/1\"/> </filters> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=infinispan/cache-container=my:add() /subsystem=infinispan/cache-container=my/transport=jgroups:add() /subsystem=infinispan/cache-container=my/invalidation-cache=mycache:add(mode=SYNC)", "batch /subsystem=infinispan/cache-container=my:add() /subsystem=infinispan/cache-container=my/transport=jgroups:add() /subsystem=infinispan/cache-container=my/invalidation-cache=mycache:add(mode=SYNC) 
run-batch", "ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC000001: Failed to start service jboss.deployment.unit.\"mdb-1.0-SNAPSHOT.jar\".cache-dependencies-installer: org.jboss.msc.service.StartException in service jboss.deployment.unit.\"mdb-1.0-SNAPSHOT.jar\".cache-dependencies-installer: Failed to start service Caused by: org.jboss.msc.service.DuplicateServiceException: Service jboss.infinispan.ejb.\"mdb-1.0-SNAPSHOT.jar\".config is already registered", "/subsystem=ejb3/file-passivation-store=file:remove /subsystem=ejb3/cluster-passivation-store=infinispan:remove /subsystem=ejb3/passivation-store=infinispan:add(cache-container=ejb, max-size=10000) /subsystem=ejb3/cache=passivating:remove /subsystem=ejb3/cache=clustered:remove /subsystem=ejb3/cache=distributable:add(passivation-store=infinispan, aliases=[passivating, clustered])", "/subsystem=messaging:migrate", "/subsystem=messaging:describe-migration", "/subsystem=messaging:describe-migration(add-legacy-entries=true) /subsystem=messaging:migrate(add-legacy-entries=true)", "java -jar -mp MODULE_PATH org.hornetq.exporter MESSAGING_BINDINGS_DIRECTORY MESSAGING_JOURNAL_DIRECTORY MESSAGING_PAGING_DIRECTORY MESSAGING_LARGE_MESSAGES_DIRECTORY > OUTPUT_DATA .xml", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.hornetq.exporter\"> <main-class name=\"org.hornetq.jms.persistence.impl.journal.XmlDataExporter\"/> <properties> <property name=\"jboss.api\" value=\"deprecated\"/> </properties> <dependencies> <module name=\"org.hornetq\"/> </dependencies> </module>", "java -jar jboss-modules.jar -mp modules/ org.hornetq.exporter standalone/data/messagingbindings/ standalone/data/messagingjournal/ standalone/data/messagingpaging standalone/data/messaginglargemessages/ > OUTPUT_DIRECTORY /OldMessagingData.xml", "EAP_HOME /bin/standalone.sh -c standalone-full.xml --start-mode=admin-only", "EAP_HOME /bin/jboss-cli.sh --connect", "/subsystem=messaging-activemq/server=default:export-journal()", "EAP_HOME /bin/jboss-cli.sh --connect", "/subsystem=messaging-activemq/server=default:import-journal(file= OUTPUT_DIRECTORY /OldMessagingData.xml)", "<connection-factory name=\"RemoteConnectionFactory\"> <entries> <entry name=\"java:jboss/exported/jms/RemoteConnectionFactory\"/> </entries> </connection-factory>", "/subsystem=messaging/hornetq-server=default/connection-factory=RemoteConnectionFactory:add(factory-type=XA_GENERIC, connector=[netty], entries=[java:jboss/exported/jms/RemoteConnectionFactory],ha=true,block-on-acknowledge=true,retry-interval=1000,retry-interval-multiplier=1.0,reconnect-attempts=-1)", "<resource-root path=\"lib\"/>", "<resource-root path=\"lib\"/> <resource-root path=\"../../../../../org/hornetq/main/lib\"/>", "2016-07-15 09:32:25,660 ERROR [org.jboss.as.controller.management-operation] (management-handler-thread - 2) WFLYCTL0013: Operation (\"add\") failed - address: ([ (\"subsystem\" => \"messaging-activemq\"), (\"jms-bridge\" => \"myBridge\") ]) - failure description: \"WFLYMSGAMQ0086: Unable to load module org.hornetq\"", "jms-queue add --queue-address=MigratedMessagesQueue --entries=[jms/queue/MigratedMessagesQueue java:jboss/exported/jms/queue/MigratedMessagesQueue]", "<jms-queue name=\"MigratedMessagesQueue\" entries=\"jms/queue/MigratedMessagesQueue java:jboss/exported/jms/queue/MigratedMessagesQueue\"/>", "<connection-factory name=\"InVmConnectionFactory\" factory-type=\"XA_GENERIC\" entries=\"java:/ConnectionFactory\" connectors=\"in-vm\"/>", 
"/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:add(factory-type=XA_GENERIC, connectors=[in-vm], entries=[java:/ConnectionFactory])", "/subsystem=messaging-activemq/jms-bridge=myBridge:add(add-messageID-in-header=true,max-batch-time=100,max-batch-size=10,max-retries=-1,failure-retry-interval=1000,quality-of-service=AT_MOST_ONCE,module=org.hornetq,source-destination=jms/queue/InQueue,source-connection-factory=jms/RemoteConnectionFactory,source-context=[(\"java.naming.factory.initial\"=>\"org.wildfly.naming.client.WildFlyInitialContextFactory\"),(\"java.naming.provider.url\"=>\"remote://127.0.0.1:4447\")],target-destination=jms/queue/MigratedMessagesQueue,target-connection-factory=java:/ConnectionFactory)", "<jms-bridge name=\"myBridge\" add-messageID-in-header=\"true\" max-batch-time=\"100\" max-batch-size=\"10\" max-retries=\"-1\" failure-retry-interval=\"1000\" quality-of-service=\"AT_MOST_ONCE\" module=\"org.hornetq\"> <source destination=\"jms/queue/InQueue\" connection-factory=\"jms/RemoteConnectionFactory\"> <source-context> <property name=\"java.naming.factory.initial\" value=\"org.wildfly.naming.client.WildFlyInitialContextFactory\"/> <property name=\"java.naming.provider.url\" value=\"remote://127.0.0.1:4447\"/> </source-context> </source> <target destination=\"jms/queue/MigratedMessagesQueue\" connection-factory=\"java:/ConnectionFactory\"/> </jms-bridge>", "/subsystem=messaging-activemq/server=default/path=journal-directory:resolve-path /subsystem=messaging-activemq/server=default/path=paging-directory:resolve-path /subsystem=messaging-activemq/server=default/path=bindings-directory:resolve-path /subsystem=messaging-activemq/server=default/path=large-messages-directory:resolve-path", "<hornetq-server> <jms-destinations> <jms-queue name=\"testQueue\"> <entry name=\"queue/test\"/> <entry name=\"java:jboss/exported/jms/queue/test\"/> </jms-queue> </jms-destinations> </hornetq-server>", "<server name=\"default\"> <jms-queue name=\"testQueue\" entries=\"queue/test java:jboss/exported/jms/queue/test\"/> </server>", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <incoming-interceptors> <class name=\"com.mycompany.incoming.myInterceptor\" module=\"com.mycompany\" /> <class name=\"com.othercompany.incoming.myOtherInterceptor\" module=\"com.othercompany\" /> </incoming-interceptors> <outgoing-interceptors> <class name=\"com.mycompany.outgoing.myInterceptor\" module=\"com.mycompany\" /> <class name=\"com.othercompany.outgoing.myOtherInterceptor\" module=\"com.othercompany\" /> </outgoing-interceptors> </server> </subsystem>", "/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add(cluster-name=my-cluster,group-name=group1)", "/subsystem=messaging-activemq/server=default/ha-policy=replication-master:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"check-for-live-server\" => true, \"cluster-name\" => \"my-cluster\", \"group-name\" => \"group1\", \"initial-replication-sync-timeout\" => 30000L }, \"response-headers\" => {\"process-state\" => \"reload-required\"} }", "javax.jms.JMSException: javax.jms.JMSException; local class incompatible: stream classdesc serialVersionUID = 8951994251593378324, local class serialVersionUID = 2368476267211489441", "JMXConnector connector = null; try { HashMap environment = new HashMap(); String[] credentials = new String[]{\"admin\", \"Password123!\"}; environment.put(JMXConnector.CREDENTIALS, credentials); // HornetQ used the 
protocol \"remoting-jmx\" and port \"9999\" JMXServiceURL beanServerUrl = new JMXServiceURL(\"service:jmx:remoting-jmx://127.0.0.1:9990\"); connector = JMXConnectorFactory.connect(beanServerUrl, environment); MBeanServerConnection mbeanServer = connector.getMBeanServerConnection(); // The JMX object name pointed to the HornetQ JMX management ObjectName objectName = new ObjectName(\"org.hornetq:type=Server,module=JMS\"); // The invoked method name was \"listConnectionIDs\" String[] connections = (String[]) mbeanServer.invoke(objectName, \"listConnectionIDs\", new Object[]{}, new String[]{}); for (String connection : connections) { System.out.println(connection); } } finally { if (connector != null) { connector.close(); } }", "JMXConnector connector = null; try { HashMap environment = new HashMap(); String[] credentials = new String[]{\"admin\", \"Password123!\"}; environment.put(JMXConnector.CREDENTIALS, credentials); // ActiveMQ Artemis uses the protocol \"remote+http\" and port \"9990\" JMXServiceURL beanServerUrl = new JMXServiceURL(\"service:jmx:remote+http://127.0.0.1:9990\"); connector = JMXConnectorFactory.connect(beanServerUrl, environment); MBeanServerConnection mbeanServer = connector.getMBeanServerConnection(); // The Jakarta Management object name points to the new Jakarta Management in the `messaging-activemq` subsystem ObjectName objectName = new ObjectName(\"jboss.as:subsystem=messaging-activemq,server=default\"); // The invoked method name is now \"listConnectionIds\" String[] connections = (String[]) mbeanServer.invoke(objectName, \"listConnectionIds\", new Object[]{}, new String[]{}); for (String connection : connections) { System.out.println(connection); } } finally { if (connector != null) { connector.close(); } }", "EAP_HOME /bin/jconsole.sh", "<subsystem xmlns=\"urn:jboss:domain:jacorb:1.4\"> <orb socket-binding=\"jacorb\" ssl-socket-binding=\"jacorb-ssl\"> <initializers security=\"identity\" transactions=\"spec\"/> </orb> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:iiop-openjdk:2.1\"> <orb socket-binding=\"jacorb\" ssl-socket-binding=\"jacorb-ssl\" /> <initializers security=\"identity\" transactions=\"spec\" /> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:jacorb:1.4\"> <orb name=\"JBoss\" print-version=\"off\" use-imr=\"off\" use-bom=\"off\" cache-typecodes=\"off\" cache-poa-names=\"off\" giop-minor-version=\"2\" socket-binding=\"jacorb\" ssl-socket-binding=\"jacorb-ssl\"> <connection retries=\"5\" retry-interval=\"500\" client-timeout=\"0\" server-timeout=\"0\" max-server-connections=\"500\" max-managed-buf-size=\"24\" outbuf-size=\"2048\" outbuf-cache-timeout=\"-1\"/> <initializers security=\"off\" transactions=\"spec\"/> </orb> <poa monitoring=\"off\" queue-wait=\"on\" queue-min=\"10\" queue-max=\"100\"> <request-processors pool-size=\"10\" max-threads=\"32\"/> </poa> <naming root-context=\"JBoss/Naming/root\" export-corbaloc=\"on\"/> <interop sun=\"on\" comet=\"off\" iona=\"off\" chunk-custom-rmi-valuetypes=\"on\" lax-boolean-encoding=\"off\" indirection-encoding-disable=\"off\" strict-check-on-tc-creation=\"off\"/> <security support-ssl=\"off\" add-component-via-interceptor=\"on\" client-supports=\"MutualAuth\" client-requires=\"None\" server-supports=\"MutualAuth\" server-requires=\"None\"/> <properties> <property name=\"some_property\" value=\"some_value\"/> </properties> </subsystem>", "WFLYRMT0022: Worker configuration is no longer used, please use endpoint worker configuration", 
"/subsystem=web/connector=http/:write-attribute(name=protocol,value=org.apache.coyote.http11.Http11NioProtocol)", "/subsystem=undertow/servlet-container=default/setting=websockets:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"buffer-pool\" => \"default\", \"dispatch-to-worker\" => true, \"worker\" => \"default\" } }", "JAR_NAME + DRIVER_CLASS_NAME + \"_\" + MAJOR_VERSION + \"_\" + MINOR_VERSION", "JAR_NAME + \"_\" + DRIVER_CLASS_NAME + \"_\" + MAJOR_VERSION + \"_\" + MINOR_VERSION", "mysql-connector-java-5.1.31-bin.jarcom.mysql.jdbc.Driver_5_1", "mysql-connector-java-5.1.31-bin.jar_com.mysql.jdbc.Driver_5_1", "/core-service=management/security-realm= REALM_NAME /authorization=ldap/group-search=principal-to-group:add(parse-group-name-from-dn=true)", "/core-service=management/ldap-connection=my-ldap-connection:write-attribute(name=always-send-client-cert,value=true)", "<management> . <outbound-connections> <ldap name=\"my-ldap-connection\" url=\"ldap://127.0.0.1:389\" search-dn=\"cn=search,dc=myCompany,dc=com\" search-credential=\"myPass\" always-send-client-cert=\"true\"/> </outbound-connections> . </management>", "ERROR [org.xnio.listener] (default I/O-6) XNIO001007: A channel event listener threw an exception: java.lang.RuntimeException: WFLYDM0114: Failed to lazily initialize SSL context Caused by: java.lang.RuntimeException: WFLYDM0112: Failed to generate self signed certificate Caused by: java.security.KeyStoreException: Cannot get key bytes, not PKCS#8 encoded", "/core-service=management/service=configuration-changes:add(max-history=10) /core-service=management/service=configuration-changes:list-changes", "/subsystem=core-management/service=configuration-changes:add(max-history=20) /subsystem=core-management/service=configuration-changes:list-changes" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/migration_guide/server_configuration_changes
Chapter 10. Preventing monopolization of a replica in a multi-supplier replication topology
Chapter 10. Preventing monopolization of a replica in a multi-supplier replication topology In a multi-supplier replication topology, a supplier under heavy update load can monopolize a replica so that other suppliers are not able to update it as well. This section describes the circumstances under which monopolization happens, how to identify the problem, and how to configure suppliers to avoid monopolization situations. 10.1. When monopolization happens One of the features of multi-supplier replication is that a supplier acquires exclusive access to a replica, and other suppliers are locked out of that replica while it sends its updates. If a locked-out supplier attempts to acquire access, the replica sends back a busy response, and the supplier waits for the time set in the nsds5ReplicaBusyWaitTime parameter before it starts another attempt. In the meantime, the supplier sends its updates to another replica and, when the first replica is free again, sends the pending updates to that host. A problem can arise if the supplier that is locked out is under a heavy update load or has a lot of pending updates in the changelog. In this situation, the locked-out supplier finishes sending updates to the alternate replica and immediately attempts to reacquire the replica that locked it out. Such an attempt usually succeeds, because the other suppliers are typically still waiting out the nsds5ReplicaBusyWaitTime interval or the pause between two update sessions set in the nsds5ReplicaSessionPauseTime parameter. As a result, a single supplier can monopolize a replica for several hours or longer. 10.2. Enabling replication logging to identify monopolization of replicas If one or more suppliers are often under a heavy update load, and replicas frequently do not receive updates, enable logging of replication messages to identify monopolization situations. Prerequisites There are multiple suppliers in the replication topology. Procedure Enable replication logging: # dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-errorlog-level=8192 Note that this command enables only replication logging, and logging of other error messages is disabled. Monitor the /var/log/dirsrv/slapd- instance_name /errors log file and search for the following error message: Replica Busy! Status: [Error (1) Replication error acquiring replica: replica busy] Note that it is normal if Directory Server occasionally logs this error. However, if replicas frequently do not receive updates, and the suppliers log this error, consider updating your configuration to solve this problem. 10.3. Configuring suppliers to avoid monopolization of replicas This procedure describes how to set parameters on a supplier to prevent monopolization of replicas. Because environments and update loads differ, set only the parameters that are relevant to your situation, and adjust the values according to your environment. Prerequisites There are multiple suppliers in the replication topology. Directory Server frequently logs Replica Busy! Status: [Error (1) Replication error acquiring replica: replica busy] errors. Procedure Set the nsds5ReplicaBusyWaitTime parameter to configure the time a supplier waits before starting another attempt to acquire access to a replica after the replica sent a busy response: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt set --suffix " dc=example,dc=com " --busy-wait-time 5 replication_agreement_name This command sets the wait time to 5 seconds. This setting applies only to the specified replication agreement.
Set the nsds5ReplicaSessionPauseTime parameter to configure the time a supplier waits between two update sessions: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt set --suffix " dc=example,dc=com " --session-pause-time 15 replication_agreement_name This command sets the pause to 15 seconds. By default, nsds5ReplicaSessionPauseTime is one second longer than the value in nsds5ReplicaBusyWaitTime . This setting applies only to the specified replication agreement. Set the nsds5ReplicaReleaseTimeout parameter to terminate replication sessions after a given amount of time, regardless of whether sending the update is complete: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication set --suffix " dc=example,dc=com " --repl-release-timeout 90 This command sets the timeout to 90 seconds. This setting applies to all replication agreements for the specified suffix. Optional: Set a timeout period for a supplier so that it does not stay connected to a consumer indefinitely while attempting to send updates over a slow or broken connection: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt set --conn-timeout 600 --suffix " dc=example,dc=com " replication_agreement_name This command sets the timeout to 600 seconds (10 minutes). To identify the optimum value, check the access log for the average amount of time the replication process takes, and set the timeout period accordingly. Additional resources Configuration and schema reference
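To verify that the values set in the procedure above are in effect, you can read the attributes back from the replication agreement entries stored under cn=config. The following ldapsearch call is a minimal sketch that uses the standard LDAP client tools rather than dsconf; it prompts for the Directory Manager password, and you can extend the attribute list with any of the parameters set in this procedure:
# attribute names come from this chapter; the hostname is the supplier used above
# ldapsearch -D "cn=Directory Manager" -W -H ldap://supplier.example.com -b "cn=config" "(objectClass=nsds5replicationAgreement)" nsds5ReplicaBusyWaitTime nsds5ReplicaSessionPauseTime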
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-errorlog-level=8192", "Replica Busy! Status: [Error (1) Replication error acquiring replica: replica busy]", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix \" dc=example,dc=com \" --busy-wait-time 5 replication_agreement_name", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix \" dc=example,dc=com \" --session-pause-time 15 replication_agreement_name", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication set --suffix \" dc=example,dc=com \" --repl-release-timeout 90", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --conn-timeout 600 --suffix \" dc=example,dc=com \" replication_agreement_name" ]
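After you have identified and resolved the monopolization problem, you will usually want to stop logging only replication messages, because the setting from section 10.2 disables logging of other error messages. The following command is a minimal sketch that assumes you want to restore the standard error log level of 16384; if your instance used a different nsslapd-errorlog-level value before the change, restore that value instead:
# dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-errorlog-level=16384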
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_preventing-monopolization-of-a-replica-in-a-multi-supplier-replication-topology_configuring-and-managing-replication
Chapter 3. Configuration fields
Chapter 3. Configuration fields This section describes both the required and optional configuration fields when deploying Red Hat Quay. 3.1. Required configuration fields The fields required to configure Red Hat Quay are covered in the following sections: General required fields Storage for images Database for metadata Redis for build logs and user events Tag expiration options 3.2. Automation options The following sections describe the available automation options for Red Hat Quay deployments: Pre-configuring Red Hat Quay for automation Using the API to create the first user 3.3. Optional configuration fields Optional fields for Red Hat Quay can be found in the following sections: Basic configuration SSL LDAP Repository mirroring Quota management Security scanner Helm Action log Build logs Dockerfile build OAuth Configuring nested repositories Adding other OCI media types to Quay Mail User Recaptcha ACI JWT App tokens Miscellaneous User interface v2 IPv6 configuration field Legacy options 3.4. General required fields The following table describes the required configuration fields for a Red Hat Quay deployment: Table 3.1. General required fields Field Type Description AUTHENTICATION_TYPE (Required) String The authentication engine to use for credential authentication. Values: One of Database , LDAP , JWT , Keystone , Default: Database PREFERRED_URL_SCHEME (Required) String The URL scheme to use when accessing Red Hat Quay. Values: One of http , https Default: http SERVER_HOSTNAME (Required) String The URL at which Red Hat Quay is accessible, without the scheme. Example: quay-server.example.com DATABASE_SECRET_KEY (Required) String Key used to encrypt sensitive fields within the database. This value should never be changed once set; otherwise, all reliant fields, for example, repository mirror username and password configurations, are invalidated. SECRET_KEY (Required) String Key used to encrypt sensitive fields within the database and at run time. This value should never be changed once set; otherwise, all reliant fields, for example, encrypted password credentials, are invalidated. SETUP_COMPLETE (Required) Boolean This is an artefact left over from earlier versions of the software and currently it must be specified with a value of true . 3.5. Database configuration This section describes the database configuration fields available for Red Hat Quay deployments. 3.5.1. Database URI With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field: Table 3.2. Database URI Field Type Description DB_URI (Required) String The URI for accessing the database, including any credentials. Example DB_URI field: postgresql://quayuser:[email protected]:5432/quay 3.5.2. Database connection arguments Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific. The following table describes database connection arguments: Table 3.3. Database connection arguments Field Type Description DB_CONNECTION_ARGS Object Optional connection arguments for the database, such as timeouts and SSL/TLS. .autorollback Boolean Whether to use auto-rollback connections. Should always be true .threadlocals Boolean Whether to use thread-local connections.
Should always be true 3.5.2.1. PostgreSQL SSL/TLS connection arguments With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration: DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert The sslmode option determines whether, or with, what priority a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes: Table 3.4. SSL/TLS options Mode Description disable Your configuration only tries non-SSL/TLS connections. allow Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. prefer (Default) Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. require Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. verify-ca Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). verify-full Only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions . 3.5.2.2. MySQL SSL/TLS connection arguments The following example shows a sample MySQL SSL/TLS configuration: DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 3.6. Image storage This section details the image storage features and configuration fields that are available with Red Hat Quay. 3.6.1. Image storage features The following table describes the image storage features for Red Hat Quay: Table 3.5. Storage config features Field Type Description FEATURE_REPO_MIRROR Boolean If set to true, enables repository mirroring. Default: false FEATURE_PROXY_STORAGE Boolean Whether to proxy all direct download URLs in storage through NGINX. Default: false FEATURE_STORAGE_REPLICATION Boolean Whether to automatically replicate between storage engines. Default: false 3.6.2. Image storage configuration fields The following table describes the image storage configuration fields for Red Hat Quay: Table 3.6. Storage config fields Field Type Description DISTRIBUTED_STORAGE_CONFIG (Required) Object Configuration for storage engine(s) to use in Red Hat Quay. Each key represents an unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters. Default: [] DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS (Required) Array of string The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG ) whose images should be fully replicated, by default, to all other storage engines. DISTRIBUTED_STORAGE_PREFERENCE (Required) Array of string The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG ) to use. A preferred engine means it is first checked for pulling and images are pushed to it. Default: false MAXIMUM_LAYER_SIZE String Maximum allowed size of an image layer. Pattern : ^[0-9]+(G|M)USD Example : 100G Default: 20G 3.6.3. 
Local storage The following YAML shows a sample configuration using local storage: DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default 3.6.4. OCS/NooBaa The following YAML shows a sample configuration using an Open Container Storage/NooBaa instance: DISTRIBUTED_STORAGE_CONFIG: rhocsStorage: - RHOCSStorage - access_key: access_key_here secret_key: secret_key_here bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56 hostname: s3.openshift-storage.svc.cluster.local is_secure: 'true' port: '443' storage_path: /datastorage/registry 3.6.5. Ceph/RadosGW storage The following examples show two possible YAML configurations when using Ceph/RadosGW. Example A: Using RadosGW with the radosGWStorage driver DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: - RadosGWStorage - access_key: <access_key_here> secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true port: '443' storage_path: /datastorage/registry Example B: Using RadosGW with general s3 access DISTRIBUTED_STORAGE_CONFIG: s3Storage: 1 - RadosGWStorage - access_key: <access_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true secret_key: <secret_key_here> storage_path: /datastorage/registry 1 Used for general s3 access. Note that general s3 access is not strictly limited to Amazon Web Services (AWS) 3, and can be used with RadosGW or other storage services. For an example of general s3 access using the AWS S3 driver, see "AWS S3 storage". 3.6.6. AWS S3 storage The following YAML shows a sample configuration using AWS S3 storage. DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage 1 - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - s3Storage 1 The S3Storage storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access". 3.6.7. Google Cloud Storage The following YAML shows a sample configuration using Google Cloud Storage: DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage 3.6.8. Azure Storage The following YAML shows a sample configuration using Azure Storage: DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage 1 The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Azure region. As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. 
Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary . 3.6.9. Swift storage The following YAML shows a sample configuration using Swift storage: DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 1 ca_cert_path: /conf/stack/swift.cert" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage 3.6.10. Nutanix object storage The following YAML shows a sample configuration using Nutanix object storage. DISTRIBUTED_STORAGE_CONFIG: nutanixStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - nutanixStorage 3.7. Redis configuration fields This section details the configuration fields available for Redis deployments. 3.7.1. Build logs The following build logs configuration fields are available for Redis deployments: Table 3.7. Build logs configuration Field Type Description BUILDLOGS_REDIS (Required) Object Redis connection details for build logs caching. .host (Required) String The hostname at which Redis is accessible. Example: quay-server.example.com .port (Required) Number The port at which Redis is accessible. Example: 6379 .password String The password to connect to the Redis instance. Example: strongpassword .ssl (Optional) Boolean Whether to enable TLS communication between Redis and Quay. Defaults to false. 3.7.2. User events The following user event fields are available for Redis deployments: Table 3.8. User events config Field Type Description USER_EVENTS_REDIS (Required) Object Redis connection details for user event handling. .host (Required) String The hostname at which Redis is accessible. Example: quay-server.example.com .port (Required) Number The port at which Redis is accessible. Example: 6379 .password String The password to connect to the Redis instance. Example: strongpassword .ssl Boolean Whether to enable TLS communication between Redis and Quay. Defaults to false. .ssl_keyfile (Optional) String The name of the key database file, which houses the client certificate to be used. Example: ssl_keyfile: /path/to/server/privatekey.pem .ssl_certfile (Optional) String Used for specifying the file path of the SSL certificate. Example: ssl_certfile: /path/to/server/certificate.pem .ssl_cert_reqs (Optional) String Used to specify the level of certificate validation to be performed during the SSL/TLS handshake. Example: ssl_cert_reqs: CERT_REQUIRED .ssl_ca_certs (Optional) String Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates. Example: ssl_ca_certs: /path/to/ca_certs.pem .ssl_ca_data (Optional) String Used to specify a string containing the trusted CA certificates in PEM format. Example: ssl_ca_data: <certificate> .ssl_check_hostname (Optional) Boolean Used when setting up an SSL/TLS connection to a server. 
It specifies whether the client should check that the hostname in the server's SSL/TLS certificate matches the hostname of the server it is connecting to. Example: ssl_check_hostname: true 3.7.3. Example Redis configuration The following YAML shows a sample configuration using Redis with optional SSL/TLS fields: BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true ssl_*: <path_location_or_certificate> Note If your deployment uses Azure Cache for Redis and ssl is set to true , the port defaults to 6380 . 3.8. ModelCache configuration options The following options are available on Red Hat Quay for configuring ModelCache. 3.8.1. Memcache configuration option Memcache is the default ModelCache configuration option. With Memcache, no additional configuration is necessary. 3.8.2. Single Redis configuration option The following configuration is for a single Redis instance with optional read-only replicas: DATA_MODEL_CACHE_CONFIG: engine: redis redis_config: primary: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > replica: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > 3.8.3. Clustered Redis configuration option Use the following configuration for a clustered Redis instance: DATA_MODEL_CACHE_CONFIG: engine: rediscluster redis_config: startup_nodes: - host: <cluster-host> port: <port> password: <password if ssl: true> read_from_replicas: <true|false> skip_full_coverage_check: <true | false> ssl: <true | false > 3.9. Tag expiration configuration fields The following tag expiration configuration fields are available with Red Hat Quay: Table 3.9. Tag expiration configuration fields Field Type Description FEATURE_GARBAGE_COLLECTION Boolean Whether garbage collection of repositories is enabled. Default: True TAG_EXPIRATION_OPTIONS (Required) Array of string If enabled, the options that users can select for expiration of tags in their namespace. Pattern: ^[0-9]+(w|m|d|h|s)USD DEFAULT_TAG_EXPIRATION (Required) String The default, configurable tag expiration time for time machine. Pattern: ^[0-9]+(w|m|d|h|s)USD Default: 2w FEATURE_CHANGE_TAG_EXPIRATION Boolean Whether users and organizations are allowed to change the tag expiration for tags in their namespace. Default: True 3.9.1. Example tag expiration configuration The following YAML shows a sample tag expiration configuration: DEFAULT_TAG_EXPIRATION: 2w TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w 3.10. Quota management configuration fields Table 3.10. Quota management configuration Field Type Description FEATURE_QUOTA_MANAGEMENT Boolean Enables configuration, caching, and validation for quota management feature. DEFAULT_SYSTEM_REJECT_QUOTA_BYTES String Enables system default quota reject byte allowance for all organizations. By default, no limit is set. QUOTA_BACKFILL Boolean Enables the quota backfill worker to calculate the size of pre-existing blobs. Default : True QUOTA_TOTAL_DELAY_SECONDS String The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete. Default : 1800 PERMANENTLY_DELETE_TAGS Boolean Enables functionality related to the removal of tags from the time machine window. Default : False RESET_CHILD_MANIFEST_EXPIRATION Boolean Resets the expirations of temporary tags targeting the child manifests. 
With this feature set to True , child manifests are immediately garbage collected. Default : False 3.10.1. Example quota management configuration The following YAML is the suggested configuration when enabling quota management. Quota management YAML configuration FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true 3.11. Proxy cache configuration fields Table 3.11. Proxy configuration Field Type Description FEATURE_PROXY_CACHE Boolean Enables Red Hat Quay to act as a pull through cache for upstream registries. Default : false 3.12. Pre-configuring Red Hat Quay for automation Red Hat Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface. 3.12.1. Allowing the API to create the first user To create the first user, users need to set the FEATURE_USER_INITIALIZE parameter to true and call the /api/v1/user/initialize API. Unlike all other registry API calls that require an OAuth token generated by an OAuth application in an existing organization, the API endpoint does not require authentication. Users can use the API to create a user such as quayadmin after deploying Red Hat Quay, provided no other users have been created. For more information, see Using the API to create the first user . 3.12.2. Enabling general API access Users should set the BROWSER_API_CALLS_XHR_ONLY config option to false to allow general access to the Red Hat Quay registry API. 3.12.3. Adding a superuser After deploying Red Hat Quay, users can create a user and give the first user administrator privileges with full permissions. Users can configure full permissions in advance by using the SUPER_USER configuration object. For example: ... SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin ... 3.12.4. Restricting user creation After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting the FEATURE_USER_CREATION to false . For example: ... FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false ... 3.12.5. Enabling new functionality in Red Hat Quay 3.8 To use new Red Hat Quay 3.8 functions, enable some or all of the following features: ... FEATURE_UI_V2: true FEATURE_LISTEN_IP_VERSION: FEATURE_SUPERUSERS_FULL_ACCESS: true GLOBAL_READONLY_SUPER_USERS: - FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - ... 3.12.6. Enabling new functionality in Red Hat Quay 3.7 To use new Red Hat Quay 3.7 functions, enable some or all of the following features: ... FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: true FEATURE_PROXY_CACHE: true FEATURE_STORAGE_REPLICATION: true DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000 ... 3.12.7. Suggested configuration for automation The following config.yaml parameters are suggested for automation: ... FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false ... 3.12.8. Deploying the Red Hat Quay Operator using the initial configuration Use the following procedure to deploy Red Hat Quay on OpenShift Container Platform using the initial configuration. Prerequisites You have installed the oc CLI. 
Procedure Create a secret using the configuration file: USD oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret Create a quayregistry.yaml file. Identify the unmanaged components and reference the created secret, for example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret Deploy the Red Hat Quay registry: USD oc create -n quay-enterprise -f quayregistry.yaml Steps Using the API to create the first user 3.12.8.1. Using the API to create the first user Use the following procedure to create the first user in your Red Hat Quay organization. Prerequisites The config option FEATURE_USER_INITIALIZE must be set to true . No users can already exist in the database. Procedure This procedure requests an OAuth token by specifying "access_token": true . As the root user, install python39 by entering the following command: USD sudo yum install python39 Upgrade the pip package manager for Python 3.9: USD python3.9 -m pip install --upgrade pip Use the pip package manager to install the bcrypt package: USD pip install bcrypt Generate a secure, hashed password using the bcrypt package in Python 3.9 by entering the following command: USD python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b"subquay12345", bcrypt.gensalt(12)).decode("utf-8"))' Open your Red Hat Quay configuration file and update the following configuration fields: FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin Stop the Red Hat Quay service by entering the following command: USD sudo podman stop quay Start the Red Hat Quay service by entering the following command: USD sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv} Run the following CURL command to generate a new user with a username, password, email, and access token: USD curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "[email protected]", "access_token": true}' If successful, the command returns an object with the username, email, and encrypted password. For example: {"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"[email protected]","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"} # gitleaks:allow If a user already exists in the database, an error is returned: {"message":"Cannot initialize user in a non-empty database"} If your password is not at least eight characters or contains whitespace, an error is returned: {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."} Log in to your Red Hat Quay deployment by entering the following command: USD sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false Example output Login Succeeded! 3.12.8.2. Using the OAuth token After invoking the API, you can call out the rest of the Red Hat Quay API by specifying the returned OAuth code. Prerequisites You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address. 
Procedure Obtain the list of current users by entering the following command: USD curl -X GET -k -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/ Example output: { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c", "color": "#e7ba52", "kind": "user" }, "super_user": true, "enabled": true } ] } In this instance, the details for the quayadmin user are returned as it is the only user that has been created so far. 3.12.8.3. Using the API to create an organization The following procedure details how to use the API to create a Red Hat Quay organization. Prerequisites You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address. You have called out the rest of the Red Hat Quay API by specifying the returned OAuth code. Procedure To create an organization, use a POST call to api/v1/organization/ endpoint: USD curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{"name": "testorg", "email": "[email protected]"}' Example output: "Created" You can retrieve the details of the organization you created by entering the following command: USD curl -X GET -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg Example output: { "name": "testorg", "email": "[email protected]", "avatar": { "name": "testorg", "hash": "5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8", "color": "#a55194", "kind": "user" }, "is_admin": true, "is_member": true, "teams": { "owners": { "name": "owners", "description": "", "role": "admin", "avatar": { "name": "owners", "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90", "color": "#c7c7c7", "kind": "team" }, "can_view": true, "repo_count": 0, "member_count": 1, "is_synced": false } }, "ordered_teams": [ "owners" ], "invoice_email": false, "invoice_email_address": null, "tag_expiration_s": 1209600, "is_free_account": true } 3.13. Basic configuration fields Table 3.12. Basic configuration Field Type Description REGISTRY_TITLE String If specified, the long-form title for the registry. Displayed in frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters. Default: Red Hat Quay REGISTRY_TITLE_SHORT String If specified, the short-form title for the registry. Title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization's Tutorial page. Default: Red Hat Quay CONTACT_INFO Array of String If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly. [0] String Adds a link to send an e-mail. Pattern: ^mailto:(.)+USD Example: mailto:[email protected] [1] String Adds a link to visit an IRC chat room. 
Pattern: ^irc://(.)+USD Example: irc://chat.freenode.net:6665/quay [2] String Adds a link to call a phone number. Pattern: ^tel:(.)+USD Example: tel:+1-888-930-3475 [3] String Adds a link to a defined URL. Pattern: ^http(s)?://(.)+USD Example: https://twitter.com/quayio 3.14. SSL configuration fields Table 3.13. SSL configuration Field Type Description PREFERRED_URL_SCHEME String One of http or https . Note that users only set their PREFERRED_URL_SCHEME to http when there is no TLS encryption in the communication path from the client to Quay. Users must set their PREFERRED_URL_SCHEME to https when using a TLS-terminating load balancer, a reverse proxy (for example, Nginx), or when using Quay with custom SSL certificates directly. In most cases, the PREFERRED_URL_SCHEME should be https . Default: http SERVER_HOSTNAME (Required) String The URL at which Red Hat Quay is accessible, without the scheme Example: quay-server.example.com SSL_CIPHERS Array of String If specified, the nginx-defined list of SSL ciphers to enable and disable Example: [ CAMELLIA , !3DES ] SSL_PROTOCOLS Array of String If specified, nginx is configured to enable the list of SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Red Hat Quay startup. Example: ['TLSv1','TLSv1.1','TLSv1.2','TLSv1.3'] SESSION_COOKIE_SECURE Boolean Whether the secure property should be set on session cookies Default: False Recommendation: Set to True for all installations using SSL 3.14.1. Configuring SSL Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: Edit the config.yaml file and specify that you want Quay to handle TLS: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... Stop the Quay container and restart the registry. 3.15. Adding TLS Certificates to the Red Hat Quay Container To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/ beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory. 3.15.1. Add TLS certificates to Red Hat Quay View the certificate to be added to the container Create the certs directory and copy the certificate there Obtain the Quay container's CONTAINER ID with podman ps : Restart the container with that ID: Examine the certificate copied into the container namespace: 3.16. LDAP configuration fields Table 3.14. LDAP configuration Field Type Description AUTHENTICATION_TYPE (Required) String Must be set to LDAP . FEATURE_TEAM_SYNCING Boolean Whether to allow for team membership to be synced from a backing group in the authentication engine (LDAP or Keystone). Default: true FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP Boolean If enabled, non-superusers can set up syncing on teams using LDAP. Default: false LDAP_ADMIN_DN String The admin DN for LDAP authentication. LDAP_ADMIN_PASSWD String The admin password for LDAP authentication. LDAP_ALLOW_INSECURE_FALLBACK Boolean Whether or not to allow SSL insecure fallback for LDAP authentication. LDAP_BASE_DN Array of String The base DN for LDAP authentication. LDAP_EMAIL_ATTR String The email attribute for LDAP authentication. LDAP_UID_ATTR String The uid attribute for LDAP authentication. LDAP_URI String The LDAP URI. LDAP_USER_FILTER String The user filter for LDAP authentication. LDAP_USER_RDN Array of String The user RDN for LDAP authentication. 
TEAM_RESYNC_STALE_TIME String If team syncing is enabled for a team, how often to check its membership and resync if necessary. Pattern: ^[0-9]+(w|m|d|h|s)USD Example: 2h Default: 30m LDAP_SUPERUSER_FILTER String Subset of the LDAP_USER_FILTER configuration field. When configured, allows Red Hat Quay administrators the ability to configure Lightweight Directory Access Protocol (LDAP) users as superusers when Red Hat Quay uses LDAP as its authentication provider. With this field, administrators can add or remove superusers without having to update the Red Hat Quay configuration file and restart their deployment. This field requires that your AUTHENTICATION_TYPE is set to LDAP . LDAP_RESTRICTED_USER_FILTER String Subset of the LDAP_USER_FILTER configuration field. When configured, allows Red Hat Quay administrators the ability to configure Lightweight Directory Access Protocol (LDAP) users as restricted users when Red Hat Quay uses LDAP as its authentication provider. This field requires that your AUTHENTICATION_TYPE is set to LDAP . LDAP_TIMEOUT Integer Determines the maximum time period. in seconds, allowed for establishing a connection to the Lightweight Directory Access Protocol (LDAP) server. + Default: 10 LDAP_NETWORK_TIMEOUT Integer Defines the maximum time duration, in seconds, that Red Hat Quay waits for a response from the Lightweight Directory Access Protocol (LDAP) server during network operations. + Default: 10 3.16.1. LDAP configuration references Use the following references to update your config.yaml file with the desired configuration field. 3.16.1.1. Basic LDAP configuration --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldaps://<ldap_url_domain_name> LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com 3.16.1.2. LDAP restricted user configuration --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com --- 3.16.1.3. LDAP superuser configuration reference --- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com 3.17. Mirroring configuration fields Table 3.15. 
Mirroring configuration Field Type Description FEATURE_REPO_MIRROR Boolean Enable or disable repository mirroring Default: false REPO_MIRROR_INTERVAL Number The number of seconds between checking for repository mirror candidates Default: 30 REPO_MIRROR_SERVER_HOSTNAME String Replaces the SERVER_HOSTNAME as the destination for mirroring. Default: None Example : openshift-quay-service REPO_MIRROR_TLS_VERIFY Boolean Require HTTPS and verify certificates of Quay registry during mirror. Default: false REPO_MIRROR_ROLLBACK Boolean When set to true , the repository rolls back after a failed mirror attempt. Default : false 3.18. Security scanner configuration fields Table 3.16. Security scanner configuration Field Type Description FEATURE_SECURITY_SCANNER Boolean Enable or disable the security scanner Default: false FEATURE_SECURITY_NOTIFICATIONS Boolean If the security scanner is enabled, turn on or turn off security notifications Default: false SECURITY_SCANNER_V4_REINDEX_THRESHOLD String This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the last_indexed datetime in the manifestsecuritystatus table. This parameter is used to avoid trying to re-index every failed manifest on every indexing run. The default time to re-index is 300 seconds. SECURITY_SCANNER_V4_ENDPOINT String The endpoint for the V4 security scanner Pattern: ^http(s)?://(.)+USD Example: http://192.168.99.101:6060 SECURITY_SCANNER_V4_PSK String The generated pre-shared key (PSK) for Clair SECURITY_SCANNER_ENDPOINT String The endpoint for the V2 security scanner Pattern: ^http(s)?://(.)+USD Example: http://192.168.99.100:6060 SECURITY_SCANNER_INDEXING_INTERVAL Integer This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Red Hat Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. + Default: 30 FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX Boolean Whether to allow sending notifications about vulnerabilities for new pushes. + Default *: True 3.18.1. Re-indexing with Clair v4 When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This is true until the scanners are changed, as using different scanners will produce different information relating to a specific manifest to be returned in the report. Because of this, Clair v4 exposes a state representation of the indexing engine ( /indexer/api/v1/index_state ) to determine whether the scanner configuration has been changed. Red Hat Quay leverages this index state by saving it to the index report when parsing to Quay's database. If this state has changed since the manifest was previously scanned, Red Hat Quay will attempt to re-index that manifest during the periodic indexing process. By default this parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they did not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Red Hat Quay database. 3.18.2. 
Example security scanner configuration The following YAML is the suggested configuration when enabling the security scanner feature. Security scanner YAML configuration FEATURE_SECURITY_NOTIFICATIONS: true FEATURE_SECURITY_SCANNER: true FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true ... SECURITY_SCANNER_INDEXING_INTERVAL: 30 SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081 SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ== SERVER_HOSTNAME: quay-server.example.com ... 3.19. Helm configuration fields Table 3.17. Helm configuration fields Field Type Description FEATURE_GENERAL_OCI_SUPPORT Boolean Enable support for OCI artifacts. Default: True The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field: Field Media Type Supported content types Helm application/vnd.cncf.helm.config.v1+json application/tar+gzip , application/vnd.cncf.helm.chart.content.v1.tar+gzip Cosign application/vnd.oci.image.config.v1+json application/vnd.dev.cosign.simplesigning.v1+json , application/vnd.dsse.envelope.v1+json SPDX application/vnd.oci.image.config.v1+json text/spdx , text/spdx+xml , text/spdx+json Syft application/vnd.oci.image.config.v1+json application/vnd.syft+json CycloneDX application/vnd.oci.image.config.v1+json application/vnd.cyclonedx , application/vnd.cyclonedx+xml , application/vnd.cyclonedx+json In-toto application/vnd.oci.image.config.v1+json application/vnd.in-toto+json Unknown application/vnd.cncf.openpolicyagent.policy.layer.v1+rego application/vnd.cncf.openpolicyagent.policy.layer.v1+rego , application/vnd.cncf.openpolicyagent.data.layer.v1+json 3.19.1. Configuring Helm The following YAML is the example configuration when enabling Helm. Helm YAML configuration FEATURE_GENERAL_OCI_SUPPORT: true 3.20. Open Container Initiative configuration fields Table 3.18. Additional OCI artifact configuration field Field Type Description ALLOWED_OCI_ARTIFACT_TYPES Object The set of allowed OCI artifact mimetypes and the associated layer types. 3.20.1. Configuring additional artifact types Other OCI artifact types that are not supported by default can be added to your Red Hat Quay deployment by using the ALLOWED_OCI_ARTIFACT_TYPES configuration field. Use the following reference to add additional OCI artifact types: OCI artifact types configuration FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4> For example, you can add Singularity (SIF) support by adding the following to your config.yaml file: Example OCI artifact type configuration ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar Note When adding OCI artifact types that are not configured by default, Red Hat Quay administrators will also need to manually add support for cosign and Helm if desired. 3.21. Unknown media types Table 3.19. Unknown media types configuration field Field Type Description IGNORE_UNKNOWN_MEDIATYPES Boolean When enabled, allows a container registry platform to disregard specific restrictions on supported artifact types and accept any unrecognized or unknown media types. Default: false 3.21.1. 
Configuring unknown media types The following YAML is the example configuration when enabling unknown or unrecognized media types. Unknown media types YAML configuration IGNORE_UNKNOWN_MEDIATYPES: true 3.22. Action log configuration fields 3.22.1. Action log storage configuration Table 3.20. Action log storage configuration Field Type Description FEATURE_LOG_EXPORT Boolean Whether to allow exporting of action logs. Default: True LOGS_MODEL String Specifies the preferred method for handling log data. Values: One of database , transition_reads_both_writes_es , elasticsearch , splunk Default: database LOGS_MODEL_CONFIG Object Logs model config for action logs. LOGS_MODEL_CONFIG [object]: Logs model config for action logs. elasticsearch_config [object]: Elasticsearch cluster configuration. access_key [string]: Elasticsearch user (or IAM key for AWS ES). Example : some_string host [string]: Elasticsearch cluster endpoint. Example : host.elasticsearch.example index_prefix [string]: Elasticsearch's index prefix. Example : logentry_ index_settings [object]: Elasticsearch's index settings use_ssl [boolean]: Use ssl for Elasticsearch. Defaults to True . Example : True secret_key [string]: Elasticsearch password (or IAM secret for AWS ES). Example : some_secret_string aws_region [string]: Amazon web service region. Example : us-east-1 port [number]: Elasticsearch cluster endpoint port. Example : 1234 kinesis_stream_config [object]: AWS Kinesis Stream configuration. aws_secret_key [string]: AWS secret key. Example : some_secret_key stream_name [string]: Kinesis stream to send action logs to. Example : logentry-kinesis-stream aws_access_key [string]: AWS access key. Example : some_access_key retries [number]: Max number of attempts made on a single request. Example : 5 read_timeout [number]: Number of seconds before timeout when reading from a connection. Example : 5 max_pool_connections [number]: The maximum number of connections to keep in a connection pool. Example : 10 aws_region [string]: AWS region. Example : us-east-1 connect_timeout [number]: Number of seconds before timeout when attempting to make a connection. Example : 5 producer [string]: Logs producer if logging to Elasticsearch. enum : kafka, elasticsearch, kinesis_stream Example : kafka kafka_config [object]: Kafka cluster configuration. topic [string]: Kafka topic to publish log entries to. Example : logentry bootstrap_servers [array]: List of Kafka brokers to bootstrap the client from. max_block_seconds [number]: Max number of seconds to block during a send() , either because the buffer is full or metadata unavailable. Example : 10 producer [string]: splunk splunk_config [object]: Logs model configuration for Splunk action logs or the Splunk cluster configuration. host [string]: Splunk cluster endpoint. port [integer]: Splunk management cluster endpoint port. bearer_token [string]: The bearer token for Splunk. verify_ssl [boolean]: Enable ( True ) or disable ( False ) TLS/SSL verification for HTTPS connections. index_prefix [string]: Splunk's index prefix. ssl_ca_path [string]: The relative container path to a single .pem file containing a certificate authority (CA) for SSL validation. 3.22.2. Action log rotation and archiving configuration Table 3.21. Action log rotation and archiving configuration Field Type Description FEATURE_ACTION_LOG_ROTATION Boolean Enabling log rotation and archival will move all logs older than 30 days to storage. 
Default: false ACTION_LOG_ARCHIVE_LOCATION String If action log archiving is enabled, the storage engine in which to place the archived data. Example: : s3_us_east ACTION_LOG_ARCHIVE_PATH String If action log archiving is enabled, the path in storage in which to place the archived data. Example: archives/actionlogs ACTION_LOG_ROTATION_THRESHOLD String The time interval after which to rotate logs. Example: 30d 3.22.3. Action log audit configuration Table 3.22. Audit logs configuration field Field Type Description ACTION_LOG_AUDIT_LOGINS Boolean When set to True , tracks advanced events such as logging into, and out of, the UI, and logging in using Docker for regular users, robot accounts, and for application-specific token accounts. Default: True 3.23. Build logs configuration fields Table 3.23. Build logs configuration fields Field Type Description FEATURE_READER_BUILD_LOGS Boolean If set to true, build logs can be read by those with read access to the repository, rather than only write access or admin access. Default: False LOG_ARCHIVE_LOCATION String The storage location, defined in DISTRIBUTED_STORAGE_CONFIG , in which to place the archived build logs. Example: s3_us_east LOG_ARCHIVE_PATH String The path under the configured storage engine in which to place the archived build logs in .JSON format. Example: archives/buildlogs 3.24. Dockerfile build triggers fields Table 3.24. Dockerfile build support Field Type Description FEATURE_BUILD_SUPPORT Boolean Whether to support Dockerfile build. Default: False SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD Number If not set to None , the number of successive failures that can occur before a build trigger is automatically disabled. Default: 100 SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD Number If not set to None , the number of successive internal errors that can occur before a build trigger is automatically disabled Default: 5 3.24.1. GitHub build triggers Table 3.25. GitHub build triggers Field Type Description FEATURE_GITHUB_BUILD Boolean Whether to support GitHub build triggers. Default: False GITHUB_TRIGGER_CONFIG Object Configuration for using GitHub Enterprise for build triggers. .GITHUB_ENDPOINT (Required) String The endpoint for GitHub Enterprise. Example: https://github.com/ .API_ENDPOINT String The endpoint of the GitHub Enterprise API to use. Must be overridden for github.com . Example : https://api.github.com/ .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance; this cannot be shared with GITHUB_LOGIN_CONFIG . .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. 3.24.2. BitBucket build triggers Table 3.26. BitBucket build triggers Field Type Description FEATURE_BITBUCKET_BUILD Boolean Whether to support Bitbucket build triggers. Default: False BITBUCKET_TRIGGER_CONFIG Object Configuration for using BitBucket for build triggers. .CONSUMER_KEY (Required) String The registered consumer key (client ID) for this Red Hat Quay instance. .CONSUMER_SECRET (Required) String The registered consumer secret (client secret) for this Red Hat Quay instance. 3.24.3. GitLab build triggers Table 3.27. GitLab build triggers Field Type Description FEATURE_GITLAB_BUILD Boolean Whether to support GitLab build triggers. Default: False GITLAB_TRIGGER_CONFIG Object Configuration for using Gitlab for build triggers. .GITLAB_ENDPOINT (Required) String The endpoint at which Gitlab Enterprise is running. 
.CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. 3.25. Build manager configuration fields Table 3.28. Build manager configuration fields Field Type Description ALLOWED_WORKER_COUNT String Defines how many Build Workers are instantiated per Red Hat Quay pod. Typically set to 1 . ORCHESTRATOR_PREFIX String Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys. REDIS_HOST Object The hostname for your Redis service. REDIS_PASSWORD String The password to authenticate into your Redis service. REDIS_SSL Boolean Defines whether or not your Redis connection uses SSL/TLS. REDIS_SKIP_KEYSPACE_EVENT_SETUP Boolean By default, Red Hat Quay does not set up the keyspace events required for key events at runtime. To do so, set REDIS_SKIP_KEYSPACE_EVENT_SETUP to false . EXECUTOR String Starts a definition of an Executor of this type. Valid values are kubernetes and ec2 . BUILDER_NAMESPACE String Kubernetes namespace where Red Hat Quay Builds will take place. K8S_API_SERVER Object Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place. K8S_API_TLS_CA Object The filepath in the Quay container of the Build cluster's CA certificate for the Quay application to trust when making API calls. KUBERNETES_DISTRIBUTION String Indicates which type of Kubernetes is being used. Valid values are openshift and k8s . CONTAINER_ * Object Define the resource requests and limits for each build pod. NODE_SELECTOR_ * Object Defines the node selector label name-value pair where build Pods should be scheduled. CONTAINER_RUNTIME Object Specifies whether the Builder should run docker or podman . Customers using Red Hat's quay-builder image should set this to podman . SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN Object Defines the Service Account name or token that will be used by build pods. QUAY_USERNAME/QUAY_PASSWORD Object Defines the registry credentials needed to pull the Red Hat Quay build worker image that is specified in the WORKER_IMAGE field. Customers should provide a Red Hat Service Account credential as defined in the section "Creating Registry Service Accounts" against registry.redhat.io in the article at https://access.redhat.com/RegistryAuthentication . WORKER_IMAGE Object Image reference for the Red Hat Quay Builder image. registry.redhat.io/quay/quay-builder WORKER_TAG Object Tag for the Builder image desired. The latest version is 3.9. BUILDER_VM_CONTAINER_IMAGE Object The full reference to the container image holding the internal VM needed to run each Red Hat Quay Build. ( registry.redhat.io/quay/quay-builder-qemu-rhcos:3.9 ). SETUP_TIME String Specifies the number of seconds at which a Build times out if it has not yet registered itself with the Build Manager. Defaults at 500 seconds. Builds that time out are attempted to be restarted three times. If the Build does not register itself after three attempts it is considered failed. MINIMUM_RETRY_THRESHOLD String This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting to 0 means there are no restrictions on how many tries the build job needs to have. This value should be kept intentionally small (three or less) to ensure failovers happen quickly during infrastructure failures. You must specify a value for this setting. 
For example, Kubernetes is set as the first executor and EC2 as the second executor. If you want the last attempt to run a job to always be executed on EC2 and not Kubernetes, you can set the Kubernetes executor's MINIMUM_RETRY_THRESHOLD to 1 and EC2's MINIMUM_RETRY_THRESHOLD to 0 (defaults to 0 if not set). In this case, the Kubernetes' MINIMUM_RETRY_THRESHOLD retries_remaining(1) would evaluate to False , therefore falling back to the second executor configured. SSH_AUTHORIZED_KEYS Object List of SSH keys to bootstrap in the ignition config. This allows other keys to be used to SSH into the EC2 instance or QEMU virtual machine (VM). 3.26. OAuth configuration fields Table 3.29. OAuth fields Field Type Description DIRECT_OAUTH_CLIENTID_WHITELIST Array of String A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. 3.26.1. GitHub OAuth configuration fields Table 3.30. GitHub OAuth fields Field Type Description FEATURE_GITHUB_LOGIN Boolean Whether GitHub login is supported **Default: False GITHUB_LOGIN_CONFIG Object Configuration for using GitHub (Enterprise) as an external login provider. .ALLOWED_ORGANIZATIONS Array of String The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. .API_ENDPOINT String The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com Example: https://api.github.com/ .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance; cannot be shared with GITHUB_TRIGGER_CONFIG . Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 .GITHUB_ENDPOINT (Required) String The endpoint for GitHub (Enterprise). Example : https://github.com/ .ORG_RESTRICT Boolean If true, only users within the organization whitelist can login using this provider. 3.26.2. Google OAuth configuration fields Table 3.31. Google OAuth fields Field Type Description FEATURE_GOOGLE_LOGIN Boolean Whether Google login is supported. **Default: False GOOGLE_LOGIN_CONFIG Object Configuration for using Google for external authentication. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 3.27. OIDC configuration fields Table 3.32. OIDC fields Field Type Description <string>_LOGIN_CONFIG (Required) String The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, AZURE_LOGIN_CONFIG , however any arbitrary string is accepted. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 .DEBUGLOG Boolean Whether to enable debugging. .LOGIN_BINDING_FIELD String Used when the internal authorization is set to LDAP. Red Hat Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account. .LOGIN_SCOPES Object Adds additional scopes that Red Hat Quay uses to communicate with the OIDC provider. .OIDC_ENDPOINT_CUSTOM_PARAMS String Support for custom query parameters on OIDC endpoints. 
The following endpoints are supported: authorization_endpoint , token_endpoint , and user_endpoint . .OIDC_ISSUER String Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as iss which defines who issued the token. By default, this is read from the .well-known/openid-configuration endpoint, which is exposed by every OIDC provider. If this verification fails, there is no login. .OIDC_SERVER (Required) String The address of the OIDC server that is being used for authentication. Example: https://sts.windows.net/6c878... / .PREFERRED_USERNAME_CLAIM_NAME String Sets the preferred username to a parameter from the token. .SERVICE_ICON String Changes the icon on the login screen. .SERVICE_NAME (Required) String The name of the service that is being authenticated. Example: Azure AD .VERIFIED_EMAIL_CLAIM_NAME String The name of the claim that is used to verify the email address of the user. 3.27.1. OIDC configuration The following example shows a sample OIDC configuration. Example OIDC configuration AZURE_LOGIN_CONFIG: CLIENT_ID: <client_id> CLIENT_SECRET: <client_secret> OIDC_SERVER: <oidc_server_address> DEBUGGING: true SERVICE_NAME: Azure AD VERIFIED_EMAIL_CLAIM_NAME: <verified_email> OIDC_ENDPOINT_CUSTOM_PARAMS: "authorization_endpoint": "some": "param" 3.28. Nested repositories configuration fields Support for nested repository path names has been added under the FEATURE_EXTENDED_REPOSITORY_NAMES property. This optional configuration is added to the config.yaml by default. Enablement allows the use of / in repository names. Table 3.33. OCI and nested repositories configuration fields Field Type Description FEATURE_EXTENDED_REPOSITORY_NAMES Boolean Enable support for nested repositories Default: True OCI and nested repositories configuration example FEATURE_EXTENDED_REPOSITORY_NAMES: true 3.29. QuayIntegration configuration fields The following configuration fields are available for the QuayIntegration custom resource: Name Description Schema allowlistNamespaces (Optional) A list of namespaces to include. Array clusterID (Required) The ID associated with this cluster. String credentialsSecret.key (Required) The secret containing credentials to communicate with the Quay registry. Object denylistNamespaces (Optional) A list of namespaces to exclude. Array insecureRegistry (Optional) Whether to skip TLS verification to the Quay registry Boolean quayHostname (Required) The hostname of the Quay registry. String scheduledImageStreamImport (Optional) Whether to enable image stream importing. Boolean 3.30. Mail configuration fields Table 3.34. Mail configuration fields Field Type Description FEATURE_MAILING Boolean Whether emails are enabled Default: False MAIL_DEFAULT_SENDER String If specified, the e-mail address used as the from address when Red Hat Quay sends e-mails. If none, defaults to [email protected] Example: [email protected] MAIL_PASSWORD String The SMTP password to use when sending e-mails MAIL_PORT Number The SMTP port to use. If not specified, defaults to 587. MAIL_SERVER String The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true. Example: smtp.example.com MAIL_USERNAME String The SMTP username to use when sending e-mails MAIL_USE_TLS Boolean If specified, whether to use TLS for sending e-mails Default: True 3.31. User configuration fields Table 3.35. 
User configuration fields Field Type Description FEATURE_SUPER_USERS Boolean Whether superusers are supported Default: true FEATURE_USER_CREATION Boolean Whether users can be created (by non-superusers) Default: true FEATURE_USER_LAST_ACCESSED Boolean Whether to record the last time a user was accessed Default: true FEATURE_USER_LOG_ACCESS Boolean If set to true, users will have access to audit logs for their namespace Default: false FEATURE_USER_METADATA Boolean Whether to collect and support user metadata Default: false FEATURE_USERNAME_CONFIRMATION Boolean If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP. Default: true FEATURE_USER_RENAME Boolean If set to true, users can rename their own namespace Default: false FEATURE_INVITE_ONLY_USER_CREATION Boolean Whether users being created must be invited by another user Default: false FRESH_LOGIN_TIMEOUT String The time after which a fresh login requires users to re-enter their password Example : 5m USERFILES_LOCATION String ID of the storage engine in which to place user-uploaded files Example : s3_us_east USERFILES_PATH String Path under storage in which to place user-uploaded files Example : userfiles USER_RECOVERY_TOKEN_LIFETIME String The length of time a token for recovering a user accounts is valid Pattern : ^[0-9]+(w|m|d|h|s)USD Default : 30m FEATURE_SUPERUSERS_FULL_ACCESS Boolean Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: False FEATURE_SUPERUSERS_ORG_CREATION_ONLY Boolean Whether to only allow superusers to create organizations. Default: False FEATURE_RESTRICTED_USERS Boolean When set with RESTRICTED_USERS_WHITELIST , restricted users cannot create organizations or content in their own namespace. Normal permissions apply for an organization's membership, for example, a restricted user will still have normal permissions in organizations based on the teams that they are members of. Default: False RESTRICTED_USERS_WHITELIST String When set with FEATURE_RESTRICTED_USERS: true , specific users are excluded from the FEATURE_RESTRICTED_USERS setting. GLOBAL_READONLY_SUPER_USERS String When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. 3.31.1. User configuration fields references Use the following references to update your config.yaml file with the desired configuration field. 3.31.1.1. FEATURE_SUPERUSERS_FULL_ACCESS configuration reference --- SUPER_USERS: - quayadmin FEATURE_SUPERUSERS_FULL_ACCESS: True --- 3.31.1.2. GLOBAL_READONLY_SUPER_USERS configuration reference --- GLOBAL_READONLY_SUPER_USERS: - user1 --- 3.31.1.3. FEATURE_RESTRICTED_USERS configuration reference --- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true --- 3.31.1.4. RESTRICTED_USERS_WHITELIST configuration reference Prerequisites FEATURE_RESTRICTED_USERS is set to true in your config.yaml file. --- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - user1 --- Note When this field is set, whitelisted users can create organizations, or read or write content from the repository even if FEATURE_RESTRICTED_USERS is set to true . Other users, for example, user2 , user3 , and user4 are restricted from creating organizations, reading, or writing content 3.32. Recaptcha configuration fields Table 3.36. 
Recaptcha configuration fields Field Type Description FEATURE_RECAPTCHA Boolean Whether Recaptcha is necessary for user login and recovery Default: False RECAPTCHA_SECRET_KEY String If recaptcha is enabled, the secret key for the Recaptcha service RECAPTCHA_SITE_KEY String If recaptcha is enabled, the site key for the Recaptcha service 3.33. ACI configuration fields Table 3.37. ACI configuration fields Field Type Description FEATURE_ACI_CONVERSION Boolean Whether to enable conversion to ACIs Default: False GPG2_PRIVATE_KEY_FILENAME String The filename of the private key used to decrypte ACIs GPG2_PRIVATE_KEY_NAME String The name of the private key used to sign ACIs GPG2_PUBLIC_KEY_FILENAME String The filename of the public key used to encrypt ACIs 3.34. JWT configuration fields Table 3.38. JWT configuration fields Field Type Description JWT_AUTH_ISSUER String The endpoint for JWT users Pattern : ^http(s)?://(.)+USD Example : http://192.168.99.101:6060 JWT_GETUSER_ENDPOINT String The endpoint for JWT users Pattern : ^http(s)?://(.)+USD Example : http://192.168.99.101:6060 JWT_QUERY_ENDPOINT String The endpoint for JWT queries Pattern : ^http(s)?://(.)+USD Example : http://192.168.99.101:6060 JWT_VERIFY_ENDPOINT String The endpoint for JWT verification Pattern : ^http(s)?://(.)+USD Example : http://192.168.99.101:6060 3.35. App tokens configuration fields Table 3.39. App tokens configuration fields Field Type Description FEATURE_APP_SPECIFIC_TOKENS Boolean If enabled, users can create tokens for use by the Docker CLI Default: True APP_SPECIFIC_TOKEN_EXPIRATION String The expiration for external app tokens. Default None Pattern: ^[0-9]+(w|m|d|h|s)USD EXPIRED_APP_SPECIFIC_TOKEN_GC String Duration of time expired external app tokens will remain before being garbage collected Default: 1d 3.36. Miscellaneous configuration fields Table 3.40. Miscellaneous configuration fields Field Type Description ALLOW_PULLS_WITHOUT_STRICT_LOGGING String If true, pulls will still succeed even if the pull audit log entry cannot be written . This is useful if the database is in a read-only state and it is desired for pulls to continue during that time. Default: False AVATAR_KIND String The types of avatars to display, either generated inline (local) or Gravatar (gravatar) Values: local, gravatar BROWSER_API_CALLS_XHR_ONLY Boolean If enabled, only API calls marked as being made by an XHR will be allowed from browsers Default: True DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT Number The default maximum number of builds that can be queued in a namespace. Default: None ENABLE_HEALTH_DEBUG_SECRET String If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser EXTERNAL_TLS_TERMINATION Boolean Set to true if TLS is supported, but terminated at a layer before Quay. Set to false when Quay is running with its own SSL certificates and receiving TLS traffic directly. FRESH_LOGIN_TIMEOUT String The time after which a fresh login requires users to re-enter their password Example: 5m HEALTH_CHECKER String The configured health check Example: ('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'}) PROMETHEUS_NAMESPACE String The prefix applied to all exposed Prometheus metrics Default: quay PUBLIC_NAMESPACES Array of String If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. 
Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces. REGISTRY_STATE String The state of the registry Values: normal or read-only SEARCH_MAX_RESULT_PAGE_COUNT Number Maximum number of pages the user can paginate in search before they are limited Default: 10 SEARCH_RESULTS_PER_PAGE Number Number of results returned per page by search page Default: 10 V2_PAGINATION_SIZE Number The number of results returned per page in V2 registry APIs Default: 50 WEBHOOK_HOSTNAME_BLACKLIST Array of String The set of hostnames to disallow from webhooks when validating, beyond localhost CREATE_PRIVATE_REPO_ON_PUSH Boolean Whether new repositories created by push are set to private visibility Default: True CREATE_NAMESPACE_ON_PUSH Boolean Whether new push to a non-existent organization creates it Default: False NON_RATE_LIMITED_NAMESPACES Array of String If rate limiting has been enabled using FEATURE_RATE_LIMITS , you can override it for specific namespace that require unlimited access. FEATURE_UI_V2 Boolean When set, allows users to try the beta UI environment. Default: True FEATURE_REQUIRE_TEAM_INVITE Boolean Whether to require invitations when adding a user to a team Default: True FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH Boolean Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth Default: False FEATURE_RATE_LIMITS Boolean Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to true causes nginx to limit certain API calls to 30 per second. If that feature is not set, API calls are limited to 300 per second (effectively unlimited). Default: False FEATURE_FIPS Boolean If set to true, Red Hat Quay will run using FIPS-compliant hash functions Default: False FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL Boolean Whether to allow retrieval of aggregated log counts Default: True FEATURE_ANONYMOUS_ACCESS Boolean Whether to allow anonymous users to browse and pull public repositories Default: True FEATURE_DIRECT_LOGIN Boolean Whether users can directly login to the UI Default: True FEATURE_LIBRARY_SUPPORT Boolean Whether to allow for "namespace-less" repositories when pulling and pushing from Docker Default: True FEATURE_PARTIAL_USER_AUTOCOMPLETE Boolean If set to true, autocompletion will apply to partial usernames+ Default: True FEATURE_PERMANENT_SESSIONS Boolean Whether sessions are permanent Default: True FEATURE_PUBLIC_CATALOG Boolean If set to true, the _catalog endpoint returns public repositories. Otherwise, only private repositories can be returned. Default: False 3.37. Legacy configuration fields The following fields are deprecated or obsolete. Table 3.41. 
Legacy configuration fields Field Type Description FEATURE_BLACKLISTED_EMAILS Boolean If set to true, no new User accounts may be created if their email domain is blacklisted BLACKLISTED_EMAIL_DOMAINS Array of String The list of email-address domains that is used if FEATURE_BLACKLISTED_EMAILS is set to true Example: "example.com", "example.org" BLACKLIST_V2_SPEC String The Docker CLI versions to which Red Hat Quay will respond that V2 is unsupported Example : <1.8.0 Default: <1.6.0 DOCUMENTATION_ROOT String Root URL for documentation links SECURITY_SCANNER_V4_NAMESPACE_WHITELIST String The namespaces for which the security scanner should be enabled FEATURE_RESTRICTED_V1_PUSH Boolean If set to true, only namespaces listed in V1_PUSH_WHITELIST support V1 push Default: True V1_PUSH_WHITELIST Array of String The array of namespace names that support V1 push if FEATURE_RESTRICTED_V1_PUSH is set to true FEATURE_HELM_OCI_SUPPORT Boolean Enable support for Helm artifacts. Default: False 3.38. User interface v2 configuration field Table 3.42. User interface v2 configuration field Field Type Description FEATURE_UI_V2 Boolean When set, allows users to try the beta UI environment. Default: False 3.38.1. v2 user interface configuration With FEATURE_UI_V2 enabled, you can toggle between the current version of the user interface and the new version of the user interface. Important This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags. When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI. There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes. Procedure In your deployment's config.yaml file, add the FEATURE_UI_V2 parameter and set it to true , for example: --- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true --- Log in to your Red Hat Quay deployment. In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI . Click the toggle button to set it to new UI, and then click Use Beta Environment , for example: 3.39. IPv6 configuration field Table 3.43. IPv6 configuration field Field Type Description FEATURE_LISTEN_IP_VERSION String Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Red Hat Quay fails to start. Default: IPv4 Additional configurations: IPv6 , dual-stack 3.40. Branding configuration fields Table 3.44. Branding configuration fields Field Type Description BRANDING Object Custom branding for logos and URLs in the Red Hat Quay UI. .logo (Required) String Main logo image URL. The header logo defaults to 205x30 PX. The form logo on the Red Hat Quay sign in screen of the web UI defaults to 356.5x39.7 PX. Example: /static/img/quay-horizontal-color.svg .footer_img String Logo for UI footer. Defaults to 144x34 PX. Example: /static/img/RedHat.svg .footer_url String Link for footer image. Example: https://redhat.com 3.40.1. 
Example configuration for Red Hat Quay branding Branding config.yaml example BRANDING: logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_url: https://opensourceworld.org/ 3.41. Session timeout configuration field The following configuration field relies on the Flask API configuration field of the same name. Table 3.45. Session logout configuration field Field Type Description PERMANENT_SESSION_LIFETIME Integer A timedelta which is used to set the expiration date of a permanent session. The default is 31 days, which makes a permanent session survive for roughly one month. Default: 2678400 3.41.1. Example session timeout configuration The following YAML is the suggested configuration when enabling session lifetime. Important Altering the session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If you set the timeout too short, it might interrupt your workflow. Session timeout YAML configuration PERMANENT_SESSION_LIFETIME: 3000
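PERMANENT_SESSION_LIFETIME is expressed in seconds. If you prefer to derive the value from a more natural duration, you can compute it with the same Python 3.9 interpreter used earlier in this chapter. The following one-liner is a minimal sketch that assumes a seven-day session lifetime is wanted; adjust the timedelta arguments for your environment.
USD python3.9 -c 'from datetime import timedelta; print(int(timedelta(days=7).total_seconds()))'
604800
The printed value, 604800 in this example, is what you would then set for PERMANENT_SESSION_LIFETIME in your config.yaml file.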
[ "DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert", "DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert", "DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default", "DISTRIBUTED_STORAGE_CONFIG: rhocsStorage: - RHOCSStorage - access_key: access_key_here secret_key: secret_key_here bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56 hostname: s3.openshift-storage.svc.cluster.local is_secure: 'true' port: '443' storage_path: /datastorage/registry", "DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: - RadosGWStorage - access_key: <access_key_here> secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true port: '443' storage_path: /datastorage/registry", "DISTRIBUTED_STORAGE_CONFIG: s3Storage: 1 - RadosGWStorage - access_key: <access_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true secret_key: <secret_key_here> storage_path: /datastorage/registry", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage 1 - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - s3Storage", "DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage", "DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage", "DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 1 ca_cert_path: /conf/stack/swift.cert\" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage", "DISTRIBUTED_STORAGE_CONFIG: nutanixStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - nutanixStorage", "BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true ssl_*: <path_location_or_certificate>", "DATA_MODEL_CACHE_CONFIG: engine: redis redis_config: primary: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > replica: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false >", "DATA_MODEL_CACHE_CONFIG: engine: rediscluster redis_config: startup_nodes: - host: <cluster-host> port: <port> password: <password if ssl: true> read_from_replicas: <true|false> 
skip_full_coverage_check: <true | false> ssl: <true | false >", "DEFAULT_TAG_EXPIRATION: 2w TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w", "**Default:** `False`", "FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true", "SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false", "FEATURE_UI_V2: true FEATURE_LISTEN_IP_VERSION: FEATURE_SUPERUSERS_FULL_ACCESS: true GLOBAL_READONLY_SUPER_USERS: - FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: -", "FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: true FEATURE_PROXY_CACHE: true FEATURE_STORAGE_REPLICATION: true DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false", "oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret", "oc create -n quay-enterprise -f quayregistry.yaml", "sudo yum install python39", "python3.9 -m pip install --upgrade pip", "pip install bcrypt", "python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b\"subquay12345\", bcrypt.gensalt(12)).decode(\"utf-8\"))'", "FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin", "sudo podman stop quay", "sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ \"username\": \"quayadmin\", \"password\":\"quaypass12345\", \"email\": \"[email protected]\", \"access_token\": true}'", "{\"access_token\":\"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\", \"email\":\"[email protected]\",\"encrypted_password\":\"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW\",\"username\":\"quayadmin\"} # gitleaks:allow", "{\"message\":\"Cannot initialize user in a non-empty database\"}", "{\"message\":\"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace.\"}", "sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false", "Login Succeeded!", "curl -X GET -k -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/", "{ \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c\", \"color\": \"#e7ba52\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }", "curl -X POST -k --header 'Content-Type: application/json' -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{\"name\": \"testorg\", \"email\": \"[email protected]\"}'", "\"Created\"", "curl -X GET -k --header 'Content-Type: application/json' -H \"Authorization: Bearer 
6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg", "{ \"name\": \"testorg\", \"email\": \"[email protected]\", \"avatar\": { \"name\": \"testorg\", \"hash\": \"5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8\", \"color\": \"#a55194\", \"kind\": \"user\" }, \"is_admin\": true, \"is_member\": true, \"teams\": { \"owners\": { \"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": { \"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\" }, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false } }, \"ordered_teams\": [ \"owners\" ], \"invoice_email\": false, \"invoice_email_address\": null, \"tag_expiration_s\": 1209600, \"is_free_account\": true }", "cp ~/ssl.cert USDQUAY/config cp ~/ssl.key USDQUAY/config cd USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https", "cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] -----END CERTIFICATE-----", "mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt", "sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.9.10 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller", "sudo podman restart 5a3e82c4a75f", "sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV", "--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldaps://<ldap_url_domain_name> LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com ---", "--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", 
"FEATURE_SECURITY_NOTIFICATIONS: true FEATURE_SECURITY_SCANNER: true FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true SECURITY_SCANNER_INDEXING_INTERVAL: 30 SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081 SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ== SERVER_HOSTNAME: quay-server.example.com", "FEATURE_GENERAL_OCI_SUPPORT: true", "FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4>", "ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar", "IGNORE_UNKNOWN_MEDIATYPES: true", "AZURE_LOGIN_CONFIG: CLIENT_ID: <client_id> CLIENT_SECRET: <client_secret> OIDC_SERVER: <oidc_server_address_> DEBUGGING: true SERVICE_NAME: Azure AD VERIFIED_EMAIL_CLAIM_NAME: <verified_email> OIDC_ENDPOINT_CUSTOM_PARAMS\": \"authorization_endpoint\": \"some\": \"param\",", "FEATURE_EXTENDED_REPOSITORY_NAMES: true", "--- SUPER_USERS: - quayadmin FEATURE_SUPERUSERS_FULL_ACCESS: True ---", "--- GLOBAL_READONLY_SUPER_USERS: - user1 ---", "--- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true ---", "--- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - user1 ---", "--- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true ---", "BRANDING: logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_url: https://opensourceworld.org/", "PERMANENT_SESSION_LIFETIME: 3000" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/configure_red_hat_quay/config-fields-intro
Chapter 6. Important links
Chapter 6. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2020-10-08 11:29:40 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_overview/important_links