5.2. Load Balancing Policy
5.2. Load Balancing Policy Load balancing policy is set for a cluster, which includes one or more hosts that may each have different hardware parameters and available memory. The Red Hat Virtualization Manager uses a load balancing policy to determine which host in a cluster to start a virtual machine on. Load balancing policy also allows the Manager to determine when to move virtual machines from over-utilized hosts to under-utilized hosts. The load balancing process runs once every minute for each cluster in a data center. It determines which hosts are over-utilized, which are under-utilized, and which are valid targets for virtual machine migration. The determination is made based on the load balancing policy set by an administrator for a given cluster. The options for load balancing policies are VM_Evenly_Distributed , Evenly_Distributed , Power_Saving , Cluster_Maintenance , and None .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/load_balancing_policy
7.56. fence-agents
7.56. fence-agents 7.56.1. RHBA-2013:0540 - fence-agents bug fix update Updated fence-agents packages that fix two bugs are now available for Red Hat Enterprise Linux 6. Red Hat fence agents are a collection of scripts for handling remote power management for cluster devices. They allow failed or unreachable nodes to be forcibly restarted and removed from the cluster. Bug Fixes BZ# 908409 Previously, when fencing a Red Hat Enterprise Linux cluster node with the fence_vmware_soap fence agent, the agent terminated unexpectedly with a traceback if it was not possible to resolve a hostname to an IP address. With this update, a proper error message is displayed in the described scenario. BZ# 908401 Due to incorrect detection of newline characters during an SSH connection, the fence_drac5 agent could terminate the connection with a traceback when fencing a Red Hat Enterprise Linux cluster node. Only the first fencing action completed successfully, but the status of the node was not checked correctly. Consequently, the fence agent failed to report successful fencing. When the "reboot" operation was called, the node was only powered off. With this update, the newline characters are correctly detected and fencing works as expected. All users of fence-agents are advised to upgrade to these updated packages, which fix these bugs. 7.56.2. RHBA-2013:0286 - fence-agents bug fix and enhancement update Updated fence-agents packages that fix multiple bugs and add four enhancements are now available for Red Hat Enterprise Linux 6. The fence-agents packages provide the Red Hat fence agents to handle remote power management for cluster devices. The fence agents allow failed or unreachable nodes to be forcibly restarted and removed from the cluster. Bug Fixes BZ# 769798 The speed of fencing is critical because otherwise, broken nodes have more time to corrupt data. Prior to this update, the operation of the fence_vmware_soap fence agent was slower than expected when used on the VMware vSphere platform with hundreds of virtual machines. With this update, the fencing process is faster and does not terminate if virtual machines without a UID are encountered. BZ# 822507 Prior to this update, the attribute "unique" in XML metadata was set to TRUE (1) by default. This update modifies the underlying code to use FALSE (0) as the default value because fence agents do not use these attributes. BZ#825667 Prior to this update, certain fence agents did not generate correct metadata output. As a result, it was not possible to use the metadata for automatic generation of manual pages and user interfaces. With this update, all fence agents generate their metadata as expected. BZ#842314 Prior to this update, the fence_apc script failed to log into APC power switches whose firmware changed the end-of-line marker from CR-LF to LF. This update modifies the script to log into a fence device as expected. BZ#863568 Prior to this update, the fence_rhevm agent failed to apply the get_id regular expression when using a new href attribute. As a consequence, the plug status was not available. This update modifies the underlying code to show the correct status as either ON or OFF. Enhancements BZ#740869 This update adds the fence_ipdu agent to support IBM iPDU fence devices in Red Hat Enterprise Linux 6. BZ#752449 This update adds the fence_eaton agent to support Eaton ePDU (Enclosure Power Distribution Unit) devices in Red Hat Enterprise Linux 6. 
BZ# 800650 This update adds symlinks for common fence types that utilize standards-based agents in Red Hat Enterprise Linux 6. BZ#818337 This update adds the fence_bladecenter agent to the fence-agents packages in Red Hat Enterprise Linux 6. The agent supports the --missing-as-off feature for the HP BladeSystem, which handles missing nodes as switched-off nodes so that fencing can complete successfully even if a blade is missing. BZ#837174 This update adds support for action=metadata via standard input for all fence agents. All users of fence-agents are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
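The action=metadata enhancement can be exercised directly from a shell; a minimal, illustrative sketch (assuming the fence_ipmilan agent is installed — substitute whichever agent matches your fence device):

echo "action=metadata" | fence_ipmilan
# Prints the agent's XML metadata (parameters and supported actions), which is
# what automatic generation of manual pages and user interfaces consumes.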
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/fence-agents
Chapter 29. Migrating from an LDAP Directory to IdM
Chapter 29. Migrating from an LDAP Directory to IdM When an infrastructure has previously deployed an LDAP server for authentication and identity lookups, it is possible to migrate the user data, including passwords, to a new Identity Management instance, without losing user or password data. Identity Management has migration tools to help move directory data and only requires minimal updates to clients. However, the migration process assumes a simple deployment scenario (one LDAP namespace to one IdM namespace). For more complex environments, such as ones with multiple namespaces or custom schema, contact Red Hat support services for assistance. 29.1. An Overview of LDAP to IdM Migration The actual migration part of moving from an LDAP server to Identity Management - the process of moving the data from one server to the other - is fairly straightforward. The process is simple: move data, move passwords, and move clients. The crucial part of migration is not data migration; it is deciding how clients are going to be configured to use Identity Management. For each client in the infrastructure, you need to decide what services (such as Kerberos and SSSD) are being used and what services can be used in the final IdM deployment. A secondary, but significant, consideration is planning how to migrate passwords. Identity Management requires Kerberos hashes for every user account in addition to passwords. Some of the considerations and migration paths for passwords are covered in Section 29.1.2, "Planning Password Migration" . 29.1.1. Planning the Client Configuration Identity Management can support a number of different client configurations, with varying degrees of functionality, flexibility, and security. Decide which configuration is best for each individual client based on its operating system, functional area (such as development machines, production servers, or user laptops), and your IT maintenance priorities. Important The different client configurations are not mutually exclusive . Most environments will have a mix of different ways that clients connect to the IdM domain. Administrators must decide which scenario is best for each individual client. 29.1.1.1. Initial Client Configuration (Pre-Migration) Before deciding where you want to go with the client configuration in Identity Management, first establish where you are before the migration. The initial state for almost all LDAP deployments that will be migrated is that there is an LDAP service providing identity and authentication services. Figure 29.1. Basic LDAP Directory and Client Configuration Linux and Unix clients use PAM_LDAP and NSS_LDAP libraries to connect directly to the LDAP services. These libraries allow clients to retrieve user information from the LDAP directory as if the data were stored in /etc/passwd or /etc/shadow . (In real life, the infrastructure may be more complex if a client uses LDAP for identity lookups and Kerberos for authentication or other configurations.) There are structural differences between an LDAP directory and an IdM server, particularly in schema support and the structure of the directory tree. (For more background on those differences, see Section 1.1, "IdM v. LDAP: A More Focused Type of Service" .) 
While those differences may impact data (especially the directory tree, which affects entry names), they have little impact on the client configuration and therefore little impact on migrating clients to Identity Management. 29.1.1.2. Recommended Configuration for Red Hat Enterprise Linux Clients Red Hat Enterprise Linux has a service called the System Security Services Daemon (SSSD). SSSD uses special PAM and NSS libraries ( pam_sss and nss_sss , respectively) which allow SSSD to be integrated very closely with Identity Management and leverage the full authentication and identity features in Identity Management. SSSD has a number of useful features, like caching identity information so that users can log in even if the connection is lost to the central server; these are described in the Red Hat Enterprise Linux Deployment Guide . Unlike generic LDAP directory services (using pam_ldap and nss_ldap ), SSSD establishes relationships between identity and authentication information by defining domains . A domain in SSSD defines four backend functions: authentication, identity lookups, access, and password changes. The SSSD domain is then configured to use a provider to supply the information for any one (or all) of those four functions. An identity provider is always required in the domain configuration. The other three providers are optional; if an authentication, access, or password provider is not defined, then the identity provider is used for that function. SSSD can use Identity Management for all of its backend functions. This is the ideal configuration because it provides the full range of Identity Management functionality, unlike generic LDAP identity providers or Kerberos authentication. For example, during daily operation, SSSD enforces host-based access control rules and security features in Identity Management. Note During the migration process from an LDAP directory to Identity Management, SSSD can seamlessly migrate user passwords without additional user interaction. Figure 29.2. Clients and SSSD with an IdM Backend The ipa-client-install script automatically configures SSSD to use IdM for all four of its backend services, so Red Hat Enterprise Linux clients are set up with the recommended configuration by default. Note This client configuration is only supported for Red Hat Enterprise Linux 6.1 and later and Red Hat Enterprise Linux 5.7 and later, which support the latest versions of SSSD and ipa-client . Older versions of Red Hat Enterprise Linux can be configured as described in Section 29.1.1.3, "Alternative Supported Configuration" . 29.1.1.3. Alternative Supported Configuration Unix and Linux systems such as Mac, Solaris, HP-UX, AIX, and Scientific Linux support all of the services that IdM manages but do not use SSSD. Likewise, older Red Hat Enterprise Linux versions (6.1 and 5.6) support SSSD but have an older version, which does not support IdM as an identity provider. 
When it is not possible to use a modern version of SSSD on a system, then clients can be configured to connect to the IdM server as if it were an LDAP directory service for identity lookups (using nss_ldap ) and to IdM as if it were a regular Kerberos KDC (using pam_krb5 ). Figure 29.3. Clients and IdM with LDAP and Kerberos If a Red Hat Enterprise Linux client is using an older version of SSSD, SSSD can still be configured to use the IdM server as its identity provider and its Kerberos authentication domain; this is described in the SSSD configuration section of the Red Hat Enterprise Linux Deployment Guide . Any IdM domain client can be configured to use nss_ldap and pam_krb5 to connect to the IdM server. For some maintenance situations and IT structures, a scenario that fits the lowest common denominator may be required, using LDAP for both identity and authentication ( nss_ldap and pam_ldap ). However, it is generally best practice to use the most secure configuration possible for a client (meaning SSSD and Kerberos or LDAP and Kerberos). 29.1.2. Planning Password Migration Probably the most visible issue that can impact LDAP-to-Identity Management migration is migrating user passwords. Identity Management (by default) uses Kerberos for authentication and requires that each user has Kerberos hashes stored in the Identity Management Directory Server in addition to the standard user passwords. To generate these hashes, the user password needs to be available to the IdM server in cleartext. This is the case when the user is created in Identity Management. However, when the user is migrated from an LDAP directory, the associated user password is already hashed, so the corresponding Kerberos key cannot be generated. Important Users cannot authenticate to the IdM domain or access IdM resources until they have Kerberos hashes. If a user does not have a Kerberos hash [10] , that user cannot log into the IdM domain even if he has a user account. There are three options for migrating passwords: forcing a password change, using a web page, and using SSSD. Migrating users from an existing system provides a smoother transition but also requires parallel management of LDAP directory and IdM during the migration and transition process. If you do not preserve passwords, the migration can be performed more quickly but it requires more manual work by administrators and users. 29.1.2.1. Method 1: Using Temporary Passwords and Requiring a Change When passwords are changed in Identity Management, they will be created with the appropriate Kerberos hashes. So one alternative for administrators is to force users to change their passwords by resetting all user passwords when user accounts are migrated. (This can also be done simply by re-creating the LDAP directory accounts in IdM, which automatically creates accounts with the appropriate keys.) The new users are assigned a temporary password which they change at the first login. No passwords are migrated. 29.1.2.2. Method 2: Using the Migration Web Page When it is running in migration mode, Identity Management has a special web page in its web UI that will capture a cleartext password and create the appropriate Kerberos hash. Administrators could tell users to authenticate once to this web page, which would properly update their user accounts with their password and corresponding Kerberos hash, without requiring password changes. 29.1.2.3. 
Method 3: Using SSSD (Recommended) SSSD can work with IdM to mitigate the user impact of migration by generating the required user keys. For deployments with a lot of users or where users should not be burdened with password changes, this is the best scenario. A user tries to log into a machine with SSSD. SSSD attempts to perform Kerberos authentication against the IdM server. Even though the user exists in the system, the authentication will fail with the error key type is not supported because the Kerberos hashes do not yet exist. SSSD then performs a plaintext LDAP bind over a secure connection. IdM intercepts this bind request. If the user has a Kerberos principal but no Kerberos hashes, then the IdM identity provider generates the hashes and stores them in the user entry. If authentication is successful, SSSD disconnects from IdM and tries Kerberos authentication again. This time, the request succeeds because the hash exists in the entry. That entire process is entirely transparent to the user; as far as users know, they simply log into a client service and it works as normal. 29.1.2.4. Migrating Cleartext LDAP Passwords Although in most deployments LDAP passwords are stored encrypted, there may be some users or some environments that use cleartext passwords for user entries. When users are migrated from the LDAP server to the IdM server, their cleartext passwords are not migrated over. Identity Management does not allow cleartext passwords. Instead, a Kerberos principal is created for the user, the keytab is set to true, and the password is set as expired. This means that Identity Management requires the user to reset the password at the next login. Note If passwords are hashed, the password is successfully migrated through SSSD and the migration web page, as in Section 29.1.2.3, "Method 3: Using SSSD (Recommended)" . 29.1.2.5. Automatically Resetting Passwords That Do Not Meet Requirements If user passwords in the original directory do not meet the password policies defined in Identity Management, then the passwords must be reset after migration. Password resets are done automatically the first time the user attempts to kinit into the IdM domain. 29.1.3. Migration Considerations and Requirements As you plan the migration from an LDAP server to Identity Management, make sure that your LDAP environment is able to work with the Identity Management migration script. 29.1.3.1. LDAP Servers Supported for Migration The migration process from an LDAP server to Identity Management uses a special script, ipa migrate-ds , to perform the migration. This script has certain expectations about the structure of the LDAP directory and LDAP entries in order to work. Migration is supported only for LDAPv3-compliant directory services, which include several common directories: SunONE Directory Server Apache Directory Server OpenLDAP Migration from an LDAP server to Identity Management has been tested with Red Hat Directory Server. Note Migration using the migration script is not supported for Microsoft Active Directory because it is not an LDAPv3-compliant directory. For assistance with migrating from Active Directory, contact Red Hat Professional Services. 29.1.3.2. 
Migration Environment Requirements There are many different possible configuration scenarios for both Red Hat Directory Server and Identity Management, and any of those scenarios may affect the migration process. For the example migration procedures in this chapter, these are the assumptions about the environment: A single LDAP directory domain is being migrated to one IdM realm. No consolidation is involved. User passwords are stored as a hash in the LDAP directory that the IdM Directory Server can support. The LDAP directory instance is both the identity store and the authentication method. Client machines are configured to use pam_ldap or nss_ldap to connect to the LDAP server. Entries use only standard LDAP schema. Custom attributes will not be migrated to Identity Management. 29.1.3.3. Migration Tools Identity Management uses a specific command, ipa migrate-ds , to drive the migration process so that LDAP directory data are properly formatted and imported cleanly into the IdM server. The Identity Management server must be configured to run in migration mode, and then the migration script can be used. 29.1.3.4. Migration Sequence There are four major steps when migrating to Identity Management, but the order varies slightly depending on whether you want to migrate the server first or the clients first. With a client-based migration, SSSD is used to change the client configuration while an IdM server is configured: Deploy SSSD. Reconfigure clients to connect to the current LDAP server and then fail over to IdM. Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM. Take the LDAP server offline and allow clients to fail over to Identity Management transparently. With a server migration, the LDAP to Identity Management migration comes first: Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM. Optional. Deploy SSSD. Reconfigure clients to connect to IdM. It is not possible to simply replace the LDAP server. The IdM directory tree - and therefore user entry DNs - is different from the previous LDAP directory tree. While it is required that clients be reconfigured, clients do not need to be reconfigured immediately. Updated clients can point to the IdM server while other clients point to the old LDAP directory, allowing a reasonable testing and transition phase after the data are migrated. Note Do not run both an LDAP directory service and the IdM server for very long in parallel. This introduces the risk of user data being inconsistent between the two services. Both processes provide a general migration procedure, but it may not work in every environment. Set up a test LDAP environment and test the migration process before attempting to migrate the real LDAP environment. [10] It is possible to use LDAP authentication in Identity Management instead of Kerberos authentication, which means that Kerberos hashes are not required for users. However, this limits the capabilities of Identity Management and is not recommended.
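A minimal sketch of the tool-driven part of this sequence (the server name, bind DN, and container DNs are placeholders; adjust them to your LDAP layout as described in the sections above):

ipa config-mod --enable-migration=TRUE      # put the IdM server into migration mode
ipa migrate-ds --bind-dn="cn=Directory Manager" \
    --user-container="ou=people" \
    --group-container="ou=groups" \
    ldap://ldap.example.com:389             # export, reformat, and import the LDAP entries
ipa config-mod --enable-migration=FALSE     # disable migration mode once clients have migrated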
[ "https://ipaserver.example.com/ipa/migration", "[jsmith@server ~]USD kinit Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/migrating_from_a_directory_server_to_ipa
13.3. Build and Deploy the Hello World Quickstart
13.3. Build and Deploy the Hello World Quickstart Before building and deploying the quickstart, ensure that all the listed prerequisites are met and that the two application server instances are running (see Section 13.2, "Start Two Application Server Instances" for details). Procedure 13.3. Build and Deploy the Hello World Quickstart Navigate to the Required Directory In the command line terminal, navigate to the root directory of the quickstart. Build and Deploy to the First Application Server Instance Use the following command to build and deploy the quickstart to the first application server instance: This command deploys target/jboss-helloworld-jdg.war to the first running server instance. Build and Deploy to the Second Application Server Instance Use the following command to build and deploy the quickstart to the second application server instance with the specified ports: This command deploys target/jboss-helloworld-jdg.war to the second running server instance.
[ "mvn clean package jboss-as:deploy", "mvn clean package jboss-as:deploy -Djboss-as.port=10099" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/build_and_deploy_the_hello_world_quickstart
Chapter 15. Understanding low latency tuning for cluster nodes
Chapter 15. Understanding low latency tuning for cluster nodes Edge computing has a key role in reducing latency and congestion problems and improving application performance for telco and 5G network applications. Maintaining a network architecture with the lowest possible latency is key for meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10. 15.1. About low latency Many of the deployed applications in the Telco space require low latency that can only tolerate zero packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see Tuning for Zero Packet Loss in Red Hat OpenStack Platform (RHOSP) . The Edge computing initiative also comes into play for reducing latency rates. Think of it as being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and performance latency. Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK). OpenShift Container Platform currently provides mechanisms to tune software on an OpenShift Container Platform cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and OpenShift Container Platform set values, installing a kernel, and reconfiguring the machine. But this method requires setting up four different Operators and performing many configurations that, when done manually, are complex and prone to mistakes. OpenShift Container Platform uses the Node Tuning Operator to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator uses this performance profile configuration, which makes it easier to make these changes in a more reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads. OpenShift Container Platform also supports workload hints for the Node Tuning Operator that can tune the PerformanceProfile to meet the demands of different industry environments. Workload hints are available for highPowerConsumption (very low latency at the cost of increased power consumption) and realTime (priority given to optimum latency). A combination of true/false settings for these hints can be used to deal with application-specific workload profiles and requirements. Workload hints simplify the fine-tuning of performance to industry sector settings. Instead of a "one size fits all" approach, workload hints can cater to usage patterns such as placing priority on: Low latency Real-time capability Efficient use of power Ideally, all of the previously listed items are prioritized. Some of these items come at the expense of others, however. 
The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster administrator can now specify the use case into which the workload falls. The Node Tuning Operator uses the PerformanceProfile to fine-tune the performance settings for the workload. The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management. 15.2. About Hyper-Threading for low latency and real-time applications Hyper-Threading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyper-Threading allows for better system throughput for certain workload types where parallel processing is beneficial. The default OpenShift Container Platform configuration expects Hyper-Threading to be enabled. For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyper-Threading can slow performance times and negatively affect throughput for compute-intensive workloads that require low latency. Disabling Hyper-Threading ensures predictable performance and can decrease processing times for these workloads. Note Hyper-Threading implementation and configuration differs depending on the hardware you are running OpenShift Container Platform on. Consult the relevant host hardware tuning information for more details of the Hyper-Threading implementation specific to that hardware. Disabling Hyper-Threading can increase the cost per core of the cluster. Additional resources Configuring Hyper-Threading for a cluster
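To illustrate the PerformanceProfile workload hints discussed earlier in this chapter, here is a minimal sketch (the profile name, CPU ranges, and node selector are placeholders; adjust them to your hardware and cluster):

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile          # placeholder name
spec:
  cpu:
    isolated: "4-15"                         # CPUs dedicated to latency-sensitive workloads
    reserved: "0-3"                          # CPUs reserved for housekeeping, including pod infra containers
  realTimeKernel:
    enabled: true                            # switch the node to kernel-rt
  workloadHints:
    realTime: true                           # priority given to optimum latency
    highPowerConsumption: false              # keep power-saving measures where possible
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""   # placeholder node role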
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/cnf-understanding-low-latency
Chapter 14. Security
Chapter 14. Security OpenSSH chroot Shell Logins Generally, each Linux user is mapped to an SELinux user using SELinux policy, enabling Linux users to inherit the restrictions placed on SELinux users. There is a default mapping in which Linux users are mapped to the SELinux unconfined_u user. In Red Hat Enterprise Linux 7, the ChrootDirectory option for chrooting users can be used with unconfined users without any change, but for confined users, such as staff_u, user_u, or guest_u, the SELinux selinuxuser_use_ssh_chroot boolean has to be set. Administrators are advised to use the guest_u user for all chrooted users when using the ChrootDirectory option to achieve higher security. OpenSSH - Multiple Required Authentications Red Hat Enterprise Linux 7 supports multiple required authentications in SSH protocol version 2 using the AuthenticationMethods option. This option lists one or more comma-separated lists of authentication method names. Successful completion of all the methods in any list is required for authentication to complete. This makes it possible, for example, to require a user to authenticate using a public key or GSSAPI before being offered password authentication. GSS Proxy GSS Proxy is the system service that establishes GSS API Kerberos context on behalf of other applications. This brings security benefits; for example, when access to the system keytab is shared between different processes, a successful attack against any one of those processes leads to Kerberos impersonation of all the other processes. Changes in NSS The nss packages have been upgraded to upstream version 3.15.2. Message-Digest algorithm 2 (MD2), MD4, and MD5 signatures are no longer accepted for online certificate status protocol (OCSP) or certificate revocation lists (CRLs), consistent with their handling for general certificate signatures. Advanced Encryption Standard Galois Counter Mode (AES-GCM) Cipher Suite (RFC 5288 and RFC 5289) has been added for use when TLS 1.2 is negotiated. Specifically, the following cipher suites are now supported: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256; TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256; TLS_DHE_RSA_WITH_AES_128_GCM_SHA256; TLS_RSA_WITH_AES_128_GCM_SHA256. New Boolean Names Several SELinux boolean names have been changed to be more domain-specific. The old names can still be used; however, only the new names will appear in the lists of booleans. The old boolean names and their respective new names are available from the /etc/selinux/<policy_type>/booleans.subs_dist file. SCAP Workbench SCAP Workbench is a GUI front end that provides scanning functionality for SCAP content. SCAP Workbench is included as a Technology Preview in Red Hat Enterprise Linux 7. You can find detailed information on the website of the upstream project: https://fedorahosted.org/scap-workbench/ OSCAP Anaconda Add-On Red Hat Enterprise Linux 7 introduces the OSCAP Anaconda add-on as a Technology Preview. The add-on integrates OpenSCAP utilities with the installation process and enables installation of a system following restrictions given by SCAP content.
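A brief sketch of the two OpenSSH-related items above (the method list and the boolean value are illustrative; adapt them to your own policy):

# /etc/ssh/sshd_config: require public-key authentication to succeed
# before password authentication is offered
AuthenticationMethods publickey,password

# Allow confined SELinux users (staff_u, user_u, guest_u) to log in to a
# ChrootDirectory environment
setsebool -P selinuxuser_use_ssh_chroot 1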
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-security
Chapter 4. Branding and chroming the graphical user interface
Chapter 4. Branding and chroming the graphical user interface Customizing the Anaconda user interface may include customizing the graphical elements and the product name. This section describes how to do both. Prerequisites You have downloaded and extracted the ISO image. You have created your own branding material. For information about downloading and extracting boot images, see Extracting Red Hat Enterprise Linux boot images The user interface customization involves the following high-level tasks: Complete the prerequisites. Create custom branding material (if you plan to customize the graphical elements) Customize the graphical elements (if you plan to customize them) Customize the product name (if you plan to customize it) Create a product.img file Create a custom boot image Note To create the custom branding material, first refer to the default graphical element file types and dimensions. You can then create the custom material accordingly. Details about default graphical elements are available in the sample files that are provided in the Customizing graphical elements section. 4.1. Customizing graphical elements To customize the graphical elements, you can modify or replace the customizable elements with the custom branded material, and update the container files. The customizable graphical elements of the installer are stored in the /usr/share/anaconda/pixmaps/ directory in the installer runtime file system. This directory contains the following customizable files: Additionally, the /usr/share/anaconda/ directory contains a CSS stylesheet named anaconda-gtk.css , which determines the file names and parameters of the main UI elements - the logo and the backgrounds for the sidebar and top bar. The file has the following contents that can be customized as per your requirement: The most important part of the CSS file is the way in which it handles scaling based on resolution. The PNG image backgrounds do not scale; they are always displayed in their true dimensions. Instead, the background images have a transparent background, and the stylesheet defines a matching background color on the @define-color line. Therefore, the background images "fade" into the background color , which means that the backgrounds work on all resolutions without a need for image scaling. You could also change the background-repeat parameters to tile the background, or, if you are confident that every system you will be installing on will have the same display resolution, you can use background images which fill the entire bar. Any of the files listed above can be customized. Once you do so, follow the instructions in Section 2.2, "Creating a product.img File" to create your own product.img with custom graphics, and then Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO image with your changes included. 4.2. Customizing the product name To customize the product name, you must create a custom .buildstamp file . To do so, create a new file named .buildstamp with the following content: Change My Distribution to the name which you want to display in the installer. After you create the custom .buildstamp file, follow the steps in the Creating a product.img file section to create a new product.img file containing your customizations, and the Creating custom boot images section to create a new bootable ISO file with your changes included. 4.3. 
Customizing the Default Configuration You can create your own configuration file and use it to customize the configuration of the installer. 4.3.1. Configuring the default configuration files You can write the Anaconda configuration files in the .ini file format. An Anaconda configuration file consists of sections, options, and comments. Each section is defined by a [section] header, comments start with a # character, and keys define the options . The resulting configuration file is processed with the configparser configuration file parser. The default configuration file, located at /etc/anaconda/anaconda.conf , contains the documented sections and options that are supported. The file provides a full default configuration of the installer. You can modify the configuration using the product configuration files from /etc/anaconda/product.d/ and the custom configuration files from /etc/anaconda/conf.d/ . The following configuration file describes the default configuration of RHEL 9: 4.3.2. Configuring the product configuration files The product configuration files have one or two extra sections that identify the product. The [Product] section specifies the product name of a product. The [Base Product] section specifies the product name of a base product, if any. For example, Red Hat Enterprise Linux is a base product of Red Hat Virtualization. The installer loads configuration files of the base products before it loads the configuration file of the specified product. For example, it will first load the configuration for Red Hat Enterprise Linux and then the configuration for Red Hat Virtualization. See an example of the product configuration file for Red Hat Enterprise Linux: See an example of the product configuration file for Red Hat Virtualization: To customize the installer configuration for your product, you must create a product configuration file. Create a new file named my-distribution.conf , with content similar to the example above. Change product_name in the [Product] section to the name of your product, for example My Distribution. The product name should be the same as the name used in the .buildstamp file. After you create the custom configuration file, follow the steps in the Creating a product.img file section to create a new product.img file containing your customizations, and the Creating custom boot images section to create a new bootable ISO file with your changes included. 4.3.3. Configuring the custom configuration files To customize the installer configuration independently of the product name, you must create a custom configuration file. To do so, create a new file named 100-my-configuration.conf with content similar to the example in Configuring the default configuration files and omit the [Product] and [Base Product] sections. After you create the custom configuration file, follow the steps in the Creating a product.img file section to create a new product.img file containing your customizations, and the Creating custom boot images section to create a new bootable ISO file with your changes included.
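As a minimal sketch of packaging these customizations, a product.img file is a gzip-compressed cpio archive whose contents overlay the installer runtime file system (the directory name and file paths below are placeholders; see the Creating a product.img file section for the authoritative procedure):

# Lay out the customized files under the paths they should occupy at runtime, for example:
#   product/usr/share/anaconda/pixmaps/sidebar-logo.png
#   product/etc/anaconda/product.d/my-distribution.conf
#   product/.buildstamp
cd product/
find . | cpio -c -o | gzip -9cv > ../product.img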
[ "pixmaps ├─ anaconda-password-show-off.svg ├─ anaconda-password-show-on.svg ├─ right-arrow-icon.png ├─ sidebar-bg.png ├─ sidebar-logo.png └─ topbar-bg.png", "/* theme colors/images */ @define-color product_bg_color @redhat; /* logo and sidebar classes */ .logo-sidebar { background-image: url('/usr/share/anaconda/pixmaps/sidebar-bg.png'); background-color: @product_bg_color; background-repeat: no-repeat; } /* Add a logo to the sidebar */ .logo { background-image: url('/usr/share/anaconda/pixmaps/sidebar-logo.png'); background-position: 50% 20px; background-repeat: no-repeat; background-color: transparent; } /* This is a placeholder to be filled by a product-specific logo. */ .product-logo { background-image: none; background-color: transparent; } AnacondaSpokeWindow #nav-box { background-color: @product_bg_color; background-image: url('/usr/share/anaconda/pixmaps/topbar-bg.png'); background-repeat: no-repeat; color: white; }", "[Main] Product=My Distribution Version=9 BugURL=https://bugzilla.redhat.com/ IsFinal=True UUID=202007011344.x86_64 [Compose] Lorax=28.14.49-1", "Run Anaconda in the debugging mode. debug = False Enable Anaconda addons. This option is deprecated and will be removed in the future. addons_enabled = True List of enabled Anaconda DBus modules. This option is deprecated and will be removed in the future. kickstart_modules = List of Anaconda DBus modules that can be activated. Supported patterns: MODULE.PREFIX. , MODULE.NAME activatable_modules = org.fedoraproject.Anaconda.Modules. org.fedoraproject.Anaconda.Addons.* List of Anaconda DBus modules that are not allowed to run. Supported patterns: MODULE.PREFIX. , MODULE.NAME forbidden_modules = # List of Anaconda DBus modules that can fail to run. # The installation won't be aborted because of them. # Supported patterns: MODULE.PREFIX. , MODULE.NAME optional_modules = org.fedoraproject.Anaconda.Modules.Subscription org.fedoraproject.Anaconda.Addons.* Should the installer show a warning about enabled SMT? can_detect_enabled_smt = False Type of the installation target. type = HARDWARE A path to the physical root of the target. physical_root = /mnt/sysimage A path to the system root of the target. system_root = /mnt/sysroot Should we install the network configuration? can_configure_network = True Network device to be activated on boot if none was configured so. Valid values: # NONE No device DEFAULT_ROUTE_DEVICE A default route device FIRST_WIRED_WITH_LINK The first wired device with link # default_on_boot = NONE Default package environment. default_environment = List of ignored packages. ignored_packages = Names of repositories that provide latest updates. updates_repositories = List of .treeinfo variant types to enable. Valid items: # addon optional variant # enabled_repositories_from_treeinfo = addon optional variant Enable installation from the closest mirror. enable_closest_mirror = True Default installation source. Valid values: # CLOSEST_MIRROR Use closest public repository mirror. CDN Use Content Delivery Network (CDN). # default_source = CLOSEST_MIRROR Enable ssl verification for all HTTP connection verify_ssl = True GPG keys to import to RPM database by default. Specify paths on the installed system, each on a line. Substitutions for USDreleasever and USDbasearch happen automatically. default_rpm_gpg_keys = Enable SELinux usage in the installed system. Valid values: # -1 The value is not set. 0 SELinux is disabled. 1 SELinux is enabled. # selinux = -1 Type of the boot loader. 
Supported values: # DEFAULT Choose the type by platform. EXTLINUX Use extlinux as the boot loader. # type = DEFAULT Name of the EFI directory. efi_dir = default Hide the GRUB menu. menu_auto_hide = False Are non-iBFT iSCSI disks allowed? nonibft_iscsi_boot = False Arguments preserved from the installation system. preserved_arguments = cio_ignore rd.znet rd_ZNET zfcp.allow_lun_scan speakup_synth apic noapic apm ide noht acpi video pci nodmraid nompath nomodeset noiswmd fips selinux biosdevname ipv6.disable net.ifnames net.ifnames.prefix nosmt Enable dmraid usage during the installation. dmraid = True Enable iBFT usage during the installation. ibft = True Do you prefer creation of GPT disk labels? gpt = False Tell multipathd to use user friendly names when naming devices during the installation. multipath_friendly_names = True Do you want to allow imperfect devices (for example, degraded mdraid array devices)? allow_imperfect_devices = False Default file system type. Use whatever Blivet uses by default. file_system_type = Default partitioning. Specify a mount point and its attributes on each line. # Valid attributes: # size <SIZE> The size of the mount point. min <MIN_SIZE> The size will grow from MIN_SIZE to MAX_SIZE. max <MAX_SIZE> The max size is unlimited by default. free <SIZE> The required available space. # default_partitioning = / (min 1 GiB, max 70 GiB) /home (min 500 MiB, free 50 GiB) Default partitioning scheme. Valid values: # PLAIN Create standard partitions. BTRFS Use the Btrfs scheme. LVM Use the LVM scheme. LVM_THINP Use LVM Thin Provisioning. # default_scheme = LVM Default version of LUKS. Valid values: # luks1 Use version 1 by default. luks2 Use version 2 by default. # luks_version = luks2 Minimal size of the total memory. min_ram = 320 MiB Minimal size of the available memory for LUKS2. luks2_min_ram = 128 MiB Should we recommend to specify a swap partition? swap_is_recommended = False Recommended minimal sizes of partitions. Specify a mount point and a size on each line. min_partition_sizes = / 250 MiB /usr 250 MiB /tmp 50 MiB /var 384 MiB /home 100 MiB /boot 200 MiB Required minimal sizes of partitions. Specify a mount point and a size on each line. req_partition_sizes = Allowed device types of the / partition if any. Valid values: # LVM Allow LVM. MD Allow RAID. PARTITION Allow standard partitions. BTRFS Allow Btrfs. DISK Allow disks. LVM_THINP Allow LVM Thin Provisioning. # root_device_types = Mount points that must be on a linux file system. Specify a list of mount points. must_be_on_linuxfs = / /var /tmp /usr /home /usr/share /usr/lib Paths that must be directories on the / file system. Specify a list of paths. must_be_on_root = /bin /dev /sbin /etc /lib /root /mnt lost+found /proc Paths that must NOT be directories on the / file system. Specify a list of paths. must_not_be_on_root = Mount points that are recommended to be reformatted. # It will be recommended to create a new file system on a mount point that has an allowed prefix, but does not have a blocked one. Specify lists of mount points. reformat_allowlist = /boot /var /tmp /usr reformat_blocklist = /home /usr/local /opt /var/www The path to a custom stylesheet. custom_stylesheet = The path to a directory with help files. help_directory = /usr/share/anaconda/help A list of spokes to hide in UI. FIXME: Use other identification then names of the spokes. hidden_spokes = Should the UI allow to change the configured root account? can_change_root = False Should the UI allow to change the configured user accounts? 
can_change_users = False Define the default password policies. Specify a policy name and its attributes on each line. # Valid attributes: # quality <NUMBER> The minimum quality score (see libpwquality). length <NUMBER> The minimum length of the password. empty Allow an empty password. strict Require the minimum quality. # password_policies = root (quality 1, length 6) user (quality 1, length 6, empty) luks (quality 1, length 6) A path to EULA (if any) # If the given distribution has an EULA & feels the need to tell the user about it fill in this variable by a path pointing to a file with the EULA on the installed system. # This is currently used just to show the path to the file to the user at the end of the installation. eula =", "Anaconda configuration file for Red Hat Enterprise Linux. [Product] product_name = Red Hat Enterprise Linux Show a warning if SMT is enabled. can_detect_enabled_smt = True [Network] default_on_boot = DEFAULT_ROUTE_DEVICE [Payload] ignored_packages = ntfsprogs btrfs-progs dmraid enable_closest_mirror = False default_source = CDN [Boot loader] efi_dir = redhat [Storage] file_system_type = xfs default_partitioning = / (min 1 GiB, max 70 GiB) /home (min 500 MiB, free 50 GiB) swap [Storage Constraints] swap_is_recommended = True [User Interface] help_directory = /usr/share/anaconda/help/rhel [License] eula = /usr/share/redhat-release/EULA", "Anaconda configuration file for Red Hat Virtualization. [Product] product_name = Red Hat Virtualization (RHVH) [Base Product] product_name = Red Hat Enterprise Linux [Storage] default_scheme = LVM_THINP default_partitioning = / (min 6 GiB) /home (size 1 GiB) /tmp (size 1 GiB) /var (size 15 GiB) /var/crash (size 10 GiB) /var/log (size 8 GiB) /var/log/audit (size 2 GiB) swap [Storage Constraints] root_device_types = LVM_THINP must_not_be_on_root = /var req_partition_sizes = /var 10 GiB /boot 1 GiB" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_anaconda/branding-and-chroming-the-graphical-user-interface_customizing-anaconda
3.9. Searching for Clusters
3.9. Searching for Clusters The following table describes all search options for clusters. Table 3.5. Searching Clusters Property (of resource or resource-type) Type Description (Reference) Datacenter. datacenter-prop Depends on property type The property of the data center associated with the cluster. Datacenter String The data center to which the cluster belongs. name String The unique name that identifies the clusters on the network. description String The description of the cluster. initialized String True or False indicating the status of the cluster. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Clusters: initialized = true or name = Default This example returns a list of clusters which are initialized or named Default.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/searching_for_clusters
7.117. lsscsi
7.117. lsscsi 7.117.1. RHBA-2015:0798 - lsscsi bug fix update Updated lsscsi packages that fix one bug are now available for Red Hat Enterprise Linux 6. The lsscsi utility uses information provided by the sysfs pseudo file system in Linux kernel 2.6 and later series to list small computer system interface (SCSI) devices or all SCSI hosts attached to the system. Options can be used to control the amount and form of information provided for each device. Bug Fix BZ# 1009883 The lsscsi package has been updated to properly detect and decode the SCSI "protection_type" and "integrity" flags. Previously, the lsscsi package tried to read the "protection_type" and "integrity" flags from a location in the sysfs file system where they were not expected to be found. With this update, lsscsi now uses the proper file locations to identify these flags. Users of lsscsi are advised to upgrade to these updated packages, which fix this bug.
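For reference, a few illustrative lsscsi invocations (output varies per system and lsscsi version):

lsscsi          # list SCSI devices: host:channel:target:lun, type, vendor, model, device node
lsscsi -H       # list the SCSI hosts attached to the system
lsscsi -p       # additionally show data integrity (protection) information, relevant to the protection_type/integrity fix above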
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-lsscsi
Chapter 2. Prerequisites
Chapter 2. Prerequisites Installer-provisioned installation of OpenShift Container Platform requires: One provisioner node with Red Hat Enterprise Linux (RHEL) 9.x installed. The provisioner can be removed after installation. Three control plane nodes Baseboard management controller (BMC) access to each node At least one network: One required routable network One optional provisioning network One optional management network Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements. 2.1. Node requirements Installer-provisioned installation involves a number of hardware node requirements: CPU architecture: All nodes must use x86_64 or aarch64 CPU architecture. Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration. Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol. Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 9.x ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 9.x for the provisioner node and RHCOS 9.x for the control plane and worker nodes. Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node. Provisioner node: Installer-provisioned installation requires one provisioner node. Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing. Worker nodes: While not required, a typical production cluster has two or more worker nodes. Important Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state. Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Note Only one network card (NIC) on the same subnet can route traffic through the gateway. By default, Address Resolution Protocol (ARP) uses the lowest numbered NIC. Use a single NIC for each node in the same subnet to ensure that network load balancing works as expected. When using multiple NICs for a node in the same subnet, use a single bond or team interface. Then add the other IP addresses to that interface in the form of an alias IP address. If you require fault tolerance or load balancing at the network interface level, use an alias IP address on the bond or team interface. Alternatively, you can disable a secondary NIC on the same subnet or ensure that it has no IP address. 
Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement. Important When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail. Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed. Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details. Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the bootMode value to UEFISecureBoot in the install-config.yaml file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version 2.75.75.75 or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details. Note Red Hat does not support managing self-generated keys, or other keys, for Secure Boot. 2.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.1. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHEL 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 2.3. 
Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 2.4. Firmware requirements for installing with virtual media The installation program for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The installation program does not begin installation on a node if the node firmware is not compatible. The following tables list the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. Note Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy . For information about updating the firmware, see the hardware documentation for the nodes or contact the hardware vendor. Table 2.2. Firmware compatibility for HP hardware with Redfish virtual media Model Management Firmware versions 10th Generation iLO5 2.63 or later Table 2.3. Firmware compatibility for Dell hardware with Redfish virtual media Model Management Firmware versions 15th Generation iDRAC 9 v6.10.30.00 14th Generation iDRAC 9 v6.10.30.00 13th Generation iDRAC 8 v2.75.75.75 or later Additional resources Unable to discover new bare metal hosts using the BMC 2.5. Network requirements Installer-provisioned installation of OpenShift Container Platform involves multiple network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare-metal node. Second, installer-provisioned installation involves a routable baremetal network. 2.5.1. Ensuring required ports are open Certain ports must be open between cluster nodes for installer-provisioned installations to complete successfully. In certain situations, such as using separate subnets for far edge worker nodes, you must ensure that the nodes in these subnets can communicate with nodes in the other subnets on the following required ports. Table 2.4. Required ports Port Description 67 , 68 When using a provisioning network, cluster nodes access the dnsmasq DHCP server over their provisioning network interfaces using ports 67 and 68 . 
69 When using a provisioning network, cluster nodes communicate with the TFTP server on port 69 using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node. 80 When not using the image caching option or when using virtual media, the provisioner node must have port 80 open on the baremetal machine network interface to stream the Red Hat Enterprise Linux CoreOS (RHCOS) image from the provisioner node to the cluster nodes. 123 The cluster nodes must access the NTP server on port 123 using the baremetal machine network. 5050 The Ironic Inspector API runs on the control plane nodes and listens on port 5050 . The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare-metal nodes. 5051 Port 5050 uses port 5051 as a proxy. 6180 When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port 6180 open on the baremetal machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the RHCOS image. Starting with OpenShift Container Platform 4.13, the default HTTP port is 6180 . 6183 When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port 6183 open on the baremetal machine network interface so that the BMC of the worker nodes can access the RHCOS image. 6385 The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port 6385 . The Ironic API allows clients to interact with Ironic for bare-metal node provisioning and management, including operations such as enrolling new nodes, managing their power state, deploying images, and cleaning the hardware. 6388 Port 6385 uses port 6388 as a proxy. 8080 When using image caching without TLS, port 8080 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 8083 When using the image caching option with TLS, port 8083 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 9999 By default, the Ironic Python Agent (IPA) listens on TCP port 9999 for API calls from the Ironic conductor service. Communication between the bare-metal node where IPA is running and the Ironic conductor service uses this port. 2.5.2. Increase the network MTU Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation. 2.5.3. Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot. The provisioningNetworkInterface configuration setting specifies the provisioning network NIC name on the control plane nodes, which must be identical on the control plane nodes. The bootMACAddress configuration setting provides a means to specify a particular NIC on each node for the provisioning network. 
The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . baremetal : The baremetal network is a routable network. You can use any NIC to interface with the baremetal network provided the NIC is not configured to use the provisioning network. Important When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network. 2.5.4. DNS requirements Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name. <cluster_name>.<base_domain> For example: test-cluster.example.com OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. CoreDNS requires both TCP and UDP connections to the upstream DNS server to function correctly. Ensure the upstream DNS server can receive both TCP and UDP connections from OpenShift Container Platform cluster nodes. In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard ingress API A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes. Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Routes *.apps.<cluster_name>.<base_domain>. The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Tip You can use the dig command to verify DNS resolution. 2.5.5. Dynamic Host Configuration Protocol (DHCP) requirements By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed , which is the default value. 
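Before relying on the managed ironic-dnsmasq instance, you might want to confirm that no other DHCP server answers on the provisioning network. One possible spot check uses the nmap broadcast-dhcp-discover script; the interface name below is an assumption for illustration.
# Listen for DHCP offers on the provisioning interface (run as root)
sudo nmap --script broadcast-dhcp-discover -e eno1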
If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file. Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server. 2.5.6. Reserving IP addresses for nodes with the DHCP server For the baremetal network, a network administrator must reserve several IP addresses, including: Two unique virtual IP addresses. One virtual IP address for the API endpoint. One virtual IP address for the wildcard ingress endpoint. One IP address for the provisioner node. One IP address for each control plane node. One IP address for each worker node, if applicable. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring node network interfaces" in the "Setting up the environment for an OpenShift installation" section. Networking between external load balancers and control plane nodes External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Important The storage interface requires a DHCP reservation or a static IP. The following table provides an exemplary embodiment of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<base_domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<base_domain> <ip> Provisioner node provisioner.<cluster_name>.<base_domain> <ip> Control-plane-0 openshift-control-plane-0.<cluster_name>.<base_domain> <ip> Control-plane-1 openshift-control-plane-1.<cluster_name>-.<base_domain> <ip> Control-plane-2 openshift-control-plane-2.<cluster_name>.<base_domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<base_domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<base_domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<base_domain> <ip> Note If you do not create DHCP reservations, the installation program requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes. 2.5.7. Provisioner node requirements You must specify the MAC address for the provisioner node in your installation configuration. The bootMacAddress specification is typically associated with PXE network booting. However, the Ironic provisioning service also requires the bootMacAddress specification to identify nodes during the inspection of the cluster, or during node redeployment in the cluster. The provisioner node requires layer 2 connectivity for network booting, DHCP and DNS resolution, and local network communication. The provisioner node requires layer 3 connectivity for virtual media booting. 2.5.8. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL/TLS certificates that require validation, which might fail if the date and time between the nodes are not in sync. 
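As a quick sanity check, you can confirm from a node that an NTP source is reachable and that the clock is synchronized. A minimal sketch, assuming chrony is the NTP client in use:
# List configured time sources and their reachability
chronyc sources -v
# Confirm that the system clock is synchronized
chronyc tracking
timedatectl status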
Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes. 2.5.9. Port access for the out-of-band management IP address The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port 6180 on the provisioner node and on the OpenShift Container Platform control plane nodes. TLS port 6183 is required for virtual media installation, for example, by using Redfish. 2.6. Configuring nodes Configuring nodes when using the provisioning network Each node in the cluster requires the following configuration for proper installation. Warning A mismatch between nodes will cause an installation failure. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> The Red Hat Enterprise Linux (RHEL) 9.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 9.x using a local Satellite server or a PXE server, PXE-enable NIC2. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. PXE-enabled is optional. 2 Note Ensure PXE is disabled on all other NICs. Configure the control plane and worker nodes as follows: PXE Boot order NIC1 PXE-enabled (provisioning network) 1 Configuring nodes without the provisioning network The installation process requires one NIC: NIC Network VLAN NICx baremetal <baremetal_vlan> NICx is a routable network ( baremetal ) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet. Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . Configuring nodes for Secure Boot manually Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. Note Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media. To enable Secure Boot manually, refer to the hardware guide for the node and execute the following: Procedure Boot the node and enter the BIOS menu. Set the node's boot mode to UEFI Enabled . Enable Secure Boot. Important Red Hat does not support Secure Boot with self-generated keys. 2.7. Out-of-band management Nodes typically have an additional NIC used by the baseboard management controllers (BMCs). These BMCs must be accessible from the provisioner node. Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform installation. The out-of-band management setup is out of scope for this document. 
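Before starting the installation, it can also help to confirm that the provisioner node can actually reach each BMC over the out-of-band management network. The following sketch assumes IPMI or Redfish is in use; the addresses and credentials are placeholders.
# IPMI: query the power state of a node through its BMC
ipmitool -I lanplus -H <bmc_ip> -U <bmc_user> -P <bmc_password> power status
# Redfish: list the systems exposed by the BMC (use -k only with self-signed certificates)
curl -k -u <bmc_user>:<bmc_password> https://<bmc_ip>/redfish/v1/Systems/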
Using a separate management network for out-of-band management can enhance performance and improve security. However, using the provisioning network or the bare metal network are valid options. Note The bootstrap VM features a maximum of two network interfaces. If you configure a separate management network for out-of-band management, and you are using a provisioning network, the bootstrap VM requires routing access to the management network through one of the network interfaces. In this scenario, the bootstrap VM can then access three networks: the bare metal network the provisioning network the management network routed through one of the network interfaces 2.8. Required data for installation Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes: Out-of-band management IP Examples Dell (iDRAC) IP HP (iLO) IP Fujitsu (iRMC) IP When using the provisioning network NIC ( provisioning ) MAC address NIC ( baremetal ) MAC address When omitting the provisioning network NIC ( baremetal ) MAC address 2.9. Validation checklist for nodes When using the provisioning network ❏ NIC1 VLAN is configured for the provisioning network. ❏ NIC1 for the provisioning network is PXE-enabled on the provisioner, control plane, and worker nodes. ❏ NIC2 VLAN is configured for the baremetal network. ❏ PXE has been disabled on all other NICs. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation. When omitting the provisioning network ❏ NIC1 VLAN is configured for the baremetal network. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation.
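For the DNS items in the checklists above, a quick verification with dig against the example cluster name used in this chapter might look like the following; the reverse-lookup address is a placeholder, and each query should return the address you reserved.
# API VIP
dig +short api.test-cluster.example.com
# Wildcard ingress VIP (any name under *.apps should resolve)
dig +short console-openshift-console.apps.test-cluster.example.com
# Reverse lookup for a control plane node address
dig +short -x 192.0.2.30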
[ "<cluster_name>.<base_domain>", "test-cluster.example.com" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-prerequisites
Chapter 9. LDAP Servers
Chapter 9. LDAP Servers LDAP (Lightweight Directory Access Protocol) is a set of open protocols used to access centrally stored information over a network. It is based on the X.500 standard for directory sharing, but is less complex and resource-intensive. For this reason, LDAP is sometimes referred to as " X.500 Lite " . Like X.500, LDAP organizes information in a hierarchical manner using directories. These directories can store a variety of information such as names, addresses, or phone numbers, and can even be used in a manner similar to the Network Information Service ( NIS ), enabling anyone to access their account from any machine on the LDAP enabled network. LDAP is commonly used for centrally managed users and groups, user authentication, or system configuration. It can also serve as a virtual phone directory, allowing users to easily access contact information for other users. Additionally, it can refer a user to other LDAP servers throughout the world, and thus provide an ad-hoc global repository of information. However, it is most frequently used within individual organizations such as universities, government departments, and private companies. 9.1. Red Hat Directory Server Red Hat Directory Server is an LDAP-compliant server that centralizes user identity and application information. It provides an operating system-independent and network-based registry for storing application settings, user profiles, group data, policies, and access control information. Note You require a current Red Hat Directory Server subscription to install and update Directory Server. For further details about setting up and using Directory Server, see: Red Hat Directory Server Installation Guide Red Hat Directory Server Deployment Guide Red Hat Directory Server Administration Guide Red Hat Directory Server Configuration, Command, and File Reference Red Hat Directory Server Performance Tuning Guide
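To make the directory model concrete, a typical lookup with the OpenLDAP client tools might look like the following. The server name, suffix, and user ID are assumptions for illustration and do not come from this guide.
# Search the example suffix for a user entry and return selected attributes
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jsmith)" cn mail telephoneNumber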
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/ldap_servers
Chapter 5. Running .NET 8.0 applications in containers
Chapter 5. Running .NET 8.0 applications in containers Use the ubi8/dotnet-80-runtime image to run a .NET application inside a Linux container. The following example uses podman. Procedure Create a new MVC project in a directory called mvc_runtime_example : Publish the project: Run your image: Verification steps View the application running in the container:
[ "dotnet new mvc --output mvc_runtime_example", "dotnet publish mvc_runtime_example -f net8.0 /p:PublishProfile=DefaultContainer /p:ContainerBaseImage=registry.access.redhat.com/ubi8/dotnet-80-runtime:latest", "podman run --rm -p8080:8080 mvc_runtime_example", "xdg-open http://127.0.0.1:8080" ]
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/running-apps-in-containers-using-dotnet_getting-started-with-dotnet-on-rhel-8
17.7. The Default Configuration
17.7. The Default Configuration When the libvirt daemon ( libvirtd ) is first installed, it contains an initial virtual network switch configuration in NAT mode. This configuration is used so that installed guests can communicate with the external network through the host physical machine. The following image demonstrates this default configuration for libvirtd : Figure 17.7. Default libvirt network configuration Note A virtual network can be restricted to a specific physical interface. This may be useful on a physical system that has several interfaces (for example, eth0 , eth1 and eth2 ). This is only useful in routed and NAT modes, and can be defined in the dev=<interface> option, or in virt-manager when creating a new virtual network.
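You can inspect the default network definition described above with virsh; on a stock installation the output typically shows NAT forwarding, and restricting the network to a physical interface adds a dev attribute to the forward element. A minimal sketch:
# Show the XML definition of the default virtual network
virsh net-dumpxml default
# Edit the definition, for example to set <forward mode='nat' dev='eth1'/>
virsh net-edit default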
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-the_default_configuration
Appendix F. Replication High Availability Configuration Elements
Appendix F. Replication High Availability Configuration Elements The following table lists the valid ha-policy configuration elements when using a replication HA policy. Table F.1. Configuration Elements Available when Using Replication High Availability Name Description check-for-live-server Applies only to brokers configured as master brokers. Specifies whether the original master broker checks the cluster for another live broker using its own server ID when starting up. Set to true to fail back to the original master broker and avoid a "split brain" situation in which two brokers become live at the same time. The default value of this property is false . cluster-name Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured, the cluster configuration with this name will be used when connecting to the cluster. If unset, the first cluster connection defined in the configuration is used. group-name If set, backup brokers will only pair with live brokers that have a matching value for group-name . initial-replication-sync-timeout The amount of time the replicating broker will wait upon completion of the initial replication process for the replica to acknowledge that it has received all the necessary data. The default value of this property is 30,000 milliseconds. Note During this interval, any other journal-related operations are blocked. max-saved-replicated-journals-size Applies to backup brokers only. Specifies how many backup journal files the backup broker retains. Once this value has been reached, the broker makes space for each new backup journal file by deleting the oldest journal file. The default value of this property is 2 . allow-failback Applies to backup brokers only. Determines whether the backup broker resumes its original role when another broker such as the live broker makes a request to take its place. The default value of this property is true . restart-backup Applies to backup brokers only. Determines whether the backup broker automatically restarts after it fails back to another broker. The default value of this property is true .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/replication_elements
14.2. Finding Entries Using the Web Console
14.2. Finding Entries Using the Web Console You can use the LDAP Browser in the web console to search for entries in the Directory Server databases. Directory Server searches for entries based on the attribute-value pairs stored in the entries, not based on the attributes used in the distinguished names (DN) of these entries. For example, if an entry has a DN of uid=user_name,ou=People,dc=example,dc=com , then a search for dc=example matches the entry only when a dc attribute with the value example exists in this entry. Prerequisites You are logged in to the Directory Server web console. You have root permissions. Procedure In the web console, navigate to LDAP Browser Search . Expand and select the search criteria to filter entries: Table 14.1. Search Parameters Search Parameter Description Search base Specifies the starting point of the search. It is a distinguished name (DN) that currently exists in the database. Note The Search tab opens with a pre-defined search base when you open an entry's details in the Tree View or Table View , click the Options menu (⫶), and select Search . Search Scope Select Subtree to search entries in the whole subtree starting from the search base and including all child entries. Select One Level to search entries starting from the search base and including only the first level of child entries. Select Base to search for attribute values only in the entry specified as the search base. Size Limit Set the maximum number of entries to return from a search operation. Time Limit Set the time in seconds the search engine can look for entries. Show Locking Toggle the switch to on to see the lock status of the found entries. Search Attributes Select attributes that take part in the search. You can choose from the predefined attributes and add custom ones. Type the attribute value in the search text field and press Enter . Note Directory Server records all search requests to the access log file, which you can view at Monitoring Logging Access Log . Optional: To further refine your search, use search filters in the Filter tab to search for entries.
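Because every search request is recorded in the access log, you can also watch searches from the command line while you experiment in the LDAP Browser. The instance name in the path below is a placeholder for your own instance.
# Follow the Directory Server access log for the instance
tail -f /var/log/dirsrv/slapd-<instance_name>/access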
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/finding_entries_using_ldap_browser
Chapter 5. Additional information
Chapter 5. Additional information After adding more files to the fapolicyd trust file, use the following command to update the fapolicyd database: After removing entries from the fapolicyd trust file, you must restart fapolicyd instead:
[ "fapolicyd-cli --update", "systemctl restart fapolicyd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_fapolicyd_to_allow_only_sap_hana_executables/ref_add_info_configuring-fapolicyd
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1]
Chapter 6. UserOAuthAccessToken [oauth.openshift.io/v1] Description UserOAuthAccessToken is a virtual resource to mirror OAuthAccessTokens to the user the access token was issued for Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources authorizeToken string AuthorizeToken contains the token that authorized this token clientName string ClientName references the client that created this token. expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. inactivityTimeoutSeconds integer InactivityTimeoutSeconds is the value in seconds, from the CreationTimestamp, after which this token can no longer be used. The value is automatically incremented when the token is used. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURI string RedirectURI is the redirection associated with the token. refreshToken string RefreshToken is the value by which this token can be renewed. Can be blank. scopes array (string) Scopes is an array of the requested scopes. userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token 6.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/useroauthaccesstokens GET : list or watch objects of kind UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens GET : watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} DELETE : delete an UserOAuthAccessToken GET : read the specified UserOAuthAccessToken /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} GET : watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/oauth.openshift.io/v1/useroauthaccesstokens HTTP method GET Description list or watch objects of kind UserOAuthAccessToken Table 6.1. HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessTokenList schema 401 - Unauthorized Empty 6.2.2. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens HTTP method GET Description watch individual changes to a list of UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/oauth.openshift.io/v1/useroauthaccesstokens/{name} Table 6.3. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken HTTP method DELETE Description delete an UserOAuthAccessToken Table 6.4. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.5. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserOAuthAccessToken Table 6.6. HTTP responses HTTP code Reponse body 200 - OK UserOAuthAccessToken schema 401 - Unauthorized Empty 6.2.4. /apis/oauth.openshift.io/v1/watch/useroauthaccesstokens/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the UserOAuthAccessToken HTTP method GET Description watch changes to an object of kind UserOAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
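In practice these endpoints are usually reached through the oc client or with curl and a bearer token. A brief sketch; the token variable and cluster API URL are assumptions:
# List the OAuth access tokens issued to the currently authenticated user
oc get useroauthaccesstokens
# Equivalent raw API call
curl -k -H "Authorization: Bearer $TOKEN" https://api.cluster.example.com:6443/apis/oauth.openshift.io/v1/useroauthaccesstokens
# Revoke one of your own tokens
oc delete useroauthaccesstoken <token_name>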
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/oauth_apis/useroauthaccesstoken-oauth-openshift-io-v1
Appendix A. Using your Red Hat subscription
Appendix A. Using your Red Hat subscription Red Hat Connectivity Link is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Managing your subscriptions Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. In the menu bar, click Subscriptions to view and manage your subscriptions.
null
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/configuring_and_deploying_gateway_policies_with_connectivity_link/using_your_subscription
Chapter 9. Other notable changes
Chapter 9. Other notable changes 9.1. Javascript engine available by default on the classpath In the previous version, when Keycloak was used on Java 17 with JavaScript providers (Script authenticator, JavaScript authorization policy, or Script protocol mappers for OIDC and SAML clients), you had to copy the JavaScript engine to the distribution. This is no longer needed because the Nashorn JavaScript engine is available in the Red Hat build of Keycloak server by default. When you deploy script providers, it is recommended not to copy Nashorn's script engine and its dependencies into the Red Hat build of Keycloak distribution. 9.2. Renamed Keycloak Admin client artifacts After the upgrade to Jakarta EE, artifacts for Keycloak Admin clients were renamed to more descriptive names with consideration for long-term maintainability. However, two separate Keycloak Admin clients still exist: one with Jakarta EE and the other with Java EE support. The org.keycloak:keycloak-admin-client-jakarta artifact is no longer released. The default artifact for the Keycloak Admin client with Jakarta EE support is org.keycloak:keycloak-admin-client (since version 24.0.0). The new artifact with Java EE support is org.keycloak:keycloak-admin-client-jee . 9.2.1. Jakarta EE support Before migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency> After migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency> 9.2.2. Java EE support Before migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency> After migration: <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency> 9.3. Never expires option removed from client advanced settings combos The Never expires option is now removed from all the combo boxes of the Advanced Settings client tab. This option was misleading because the different lifespans or idle timeouts were never infinite, but were limited by the general user session or realm values. Therefore, this option is removed in favor of the two remaining options: Inherits from the realm settings (the client uses the general realm timeouts) and Expires in (the value is overridden for the client). Internally, Never expires was represented by -1 . That value is now shown with a warning in the Admin Console and cannot be set directly by the administrator. 9.4. New email rules and limits validation Red Hat build of Keycloak has new rules on email creation that allow ASCII characters during email creation. Also, a new limit of 64 characters now applies to the local part of the email address (the part before the @). To preserve backward compatibility, a new parameter --spi-user-profile-declarative-user-profile-max-email-local-part-length is added to set the maximum email local part length. The default value is 64. kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100
[ "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jakarta</artifactId> <version>18.0.0.redhat-00001</version> </dependency>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>22.0.0.redhat-00001</version> </dependency>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>18.0.0.redhat-00001</version> </dependency>", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client-jee</artifactId> <version>22.0.0.redhat-00001</version> </dependency>", "kc.sh start --spi-user-profile-declarative-user-profile-max-email-local-part-length=100" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/migration_guide/other-changes
Chapter 3. AlertService
Chapter 3. AlertService 3.1. CountAlerts GET /v1/alertscount CountAlerts counts how many alerts match the get request. 3.1.1. Description 3.1.2. Parameters 3.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.1.3. Return Type V1CountAlertsResponse 3.1.4. Content Type application/json 3.1.5. Responses Table 3.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountAlertsResponse 0 An unexpected error response. RuntimeError 3.1.6. Samples 3.1.7. Common object reference 3.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.1.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.1.7.3. V1CountAlertsResponse Field Name Required Nullable Type Description Format count Integer int32 3.2. DeleteAlerts DELETE /v1/alerts 3.2.1. Description 3.2.2. Parameters 3.2.2.1. Query Parameters Name Description Required Default Pattern query.query - null query.pagination.limit - null query.pagination.offset - null query.pagination.sortOption.field - null query.pagination.sortOption.reversed - null query.pagination.sortOption.aggregateBy.aggrFunc - UNSET query.pagination.sortOption.aggregateBy.distinct - null confirm - null 3.2.3. Return Type V1DeleteAlertsResponse 3.2.4. Content Type application/json 3.2.5. Responses Table 3.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeleteAlertsResponse 0 An unexpected error response. RuntimeError 3.2.6. Samples 3.2.7. Common object reference 3.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.2.7.3. V1DeleteAlertsResponse Field Name Required Nullable Type Description Format numDeleted Long int64 dryRun Boolean 3.3. ListAlerts GET /v1/alerts List returns the slim list version of the alerts. 3.3.1. Description 3.3.2. Parameters 3.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.3.3. Return Type V1ListAlertsResponse 3.3.4. Content Type application/json 3.3.5. Responses Table 3.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListAlertsResponse 0 An unexpected error response. RuntimeError 3.3.6. Samples 3.3.7. Common object reference 3.3.7.1. ListAlertCommonEntityInfo Fields common to all entities that an alert might belong to. Field Name Required Nullable Type Description Format clusterName String namespace String clusterId String namespaceId String resourceType StorageListAlertResourceType DEPLOYMENT, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, 3.3.7.2. ListAlertPolicyDevFields Field Name Required Nullable Type Description Format SORTName String 3.3.7.3. ListAlertResourceEntity Field Name Required Nullable Type Description Format name String 3.3.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.3.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.3.7.5. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.3.7.6. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 3.3.7.7. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 3.3.7.8. StorageListAlert Field Name Required Nullable Type Description Format id String lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, time Date date-time policy StorageListAlertPolicy state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, enforcementCount Integer int32 enforcementAction StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, commonEntityInfo ListAlertCommonEntityInfo deployment StorageListAlertDeployment resource ListAlertResourceEntity 3.3.7.9. StorageListAlertDeployment Field Name Required Nullable Type Description Format id String name String clusterName String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. namespace String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. clusterId String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. inactive Boolean namespaceId String This field is deprecated and can be found in CommonEntityInfo. It will be removed from here in a future release. 3.3.7.10. 
StorageListAlertPolicy Field Name Required Nullable Type Description Format id String name String severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, description String categories List of string developerInternalFields ListAlertPolicyDevFields 3.3.7.11. StorageListAlertResourceType Enum Values DEPLOYMENT SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 3.3.7.12. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.3.7.13. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 3.3.7.14. V1ListAlertsResponse Field Name Required Nullable Type Description Format alerts List of StorageListAlert 3.4. GetAlert GET /v1/alerts/{id} GetAlert returns the alert given its id. 3.4.1. Description 3.4.2. Parameters 3.4.2.1. Path Parameters Name Description Required Default Pattern id X null 3.4.3. Return Type StorageAlert 3.4.4. Content Type application/json 3.4.5. Responses Table 3.4. HTTP Response Codes Code Message Datatype 200 A successful response. StorageAlert 0 An unexpected error response. RuntimeError 3.4.6. Samples 3.4.7. Common object reference 3.4.7.1. AlertDeploymentContainer Field Name Required Nullable Type Description Format image StorageContainerImage name String 3.4.7.2. AlertEnforcement Field Name Required Nullable Type Description Format action StorageEnforcementAction UNSET_ENFORCEMENT, SCALE_TO_ZERO_ENFORCEMENT, UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT, KILL_POD_ENFORCEMENT, FAIL_BUILD_ENFORCEMENT, FAIL_KUBE_REQUEST_ENFORCEMENT, FAIL_DEPLOYMENT_CREATE_ENFORCEMENT, FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT, message String 3.4.7.3. AlertProcessViolation Field Name Required Nullable Type Description Format message String processes List of StorageProcessIndicator 3.4.7.4. AlertResourceResourceType Enum Values UNKNOWN SECRETS CONFIGMAPS CLUSTER_ROLES CLUSTER_ROLE_BINDINGS NETWORK_POLICIES SECURITY_CONTEXT_CONSTRAINTS EGRESS_FIREWALLS 3.4.7.5. AlertViolation Field Name Required Nullable Type Description Format message String keyValueAttrs ViolationKeyValueAttrs networkFlowInfo ViolationNetworkFlowInfo type AlertViolationType GENERIC, K8S_EVENT, NETWORK_FLOW, NETWORK_POLICY, time Date Indicates violation time. This field differs from top-level field 'time' which represents last time the alert occurred in case of multiple occurrences of the policy alert. As of 55.0, this field is set only for kubernetes event violations, but may not be limited to it in future. date-time 3.4.7.6. AlertViolationType Enum Values GENERIC K8S_EVENT NETWORK_FLOW NETWORK_POLICY 3.4.7.7. KeyValueAttrsKeyValueAttr Field Name Required Nullable Type Description Format key String value String 3.4.7.8. NetworkFlowInfoEntity Field Name Required Nullable Type Description Format name String entityType StorageNetworkEntityInfoType UNKNOWN_TYPE, DEPLOYMENT, INTERNET, LISTEN_ENDPOINT, EXTERNAL_SOURCE, INTERNAL_ENTITIES, deploymentNamespace String deploymentType String port Integer int32 3.4.7.9. PolicyMitreAttackVectors Field Name Required Nullable Type Description Format tactic String techniques List of string 3.4.7.10. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 3.4.7.11. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.4.7.11.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.4.7.12. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.4.7.13. StorageAlert Field Name Required Nullable Type Description Format id String policy StoragePolicy lifecycleStage StorageLifecycleStage DEPLOY, BUILD, RUNTIME, clusterId String clusterName String namespace String namespaceId String deployment StorageAlertDeployment image StorageContainerImage resource StorageAlertResource violations List of AlertViolation For run-time phase alert, a maximum of 40 violations are retained. processViolation AlertProcessViolation enforcement AlertEnforcement time Date date-time firstOccurred Date date-time resolvedAt Date The time at which the alert was resolved. Only set if ViolationState is RESOLVED. date-time state StorageViolationState ACTIVE, SNOOZED, RESOLVED, ATTEMPTED, snoozeTill Date date-time 3.4.7.14. 
StorageAlertDeployment Field Name Required Nullable Type Description Format id String name String type String namespace String namespaceId String labels Map of string clusterId String clusterName String containers List of AlertDeploymentContainer annotations Map of string inactive Boolean 3.4.7.15. StorageAlertResource Field Name Required Nullable Type Description Format resourceType AlertResourceResourceType UNKNOWN, SECRETS, CONFIGMAPS, CLUSTER_ROLES, CLUSTER_ROLE_BINDINGS, NETWORK_POLICIES, SECURITY_CONTEXT_CONSTRAINTS, EGRESS_FIREWALLS, name String clusterId String clusterName String namespace String namespaceId String 3.4.7.16. StorageBooleanOperator Enum Values OR AND 3.4.7.17. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 3.4.7.18. StorageEnforcementAction FAIL_KUBE_REQUEST_ENFORCEMENT: FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. FAIL_DEPLOYMENT_CREATE_ENFORCEMENT: FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates. FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT: FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. Enum Values UNSET_ENFORCEMENT SCALE_TO_ZERO_ENFORCEMENT UNSATISFIABLE_NODE_CONSTRAINT_ENFORCEMENT KILL_POD_ENFORCEMENT FAIL_BUILD_ENFORCEMENT FAIL_KUBE_REQUEST_ENFORCEMENT FAIL_DEPLOYMENT_CREATE_ENFORCEMENT FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT 3.4.7.19. StorageEventSource Enum Values NOT_APPLICABLE DEPLOYMENT_EVENT AUDIT_LOG_EVENT 3.4.7.20. StorageExclusion Field Name Required Nullable Type Description Format name String deployment StorageExclusionDeployment image StorageExclusionImage expiration Date date-time 3.4.7.21. StorageExclusionDeployment Field Name Required Nullable Type Description Format name String scope StorageScope 3.4.7.22. StorageExclusionImage Field Name Required Nullable Type Description Format name String 3.4.7.23. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 3.4.7.24. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 3.4.7.25. StorageLifecycleStage Enum Values DEPLOY BUILD RUNTIME 3.4.7.26. StorageNetworkEntityInfoType INTERNAL_ENTITIES: INTERNAL_ENTITIES is for grouping all internal entities under a single network graph node Enum Values UNKNOWN_TYPE DEPLOYMENT INTERNET LISTEN_ENDPOINT EXTERNAL_SOURCE INTERNAL_ENTITIES 3.4.7.27. StoragePolicy Field Name Required Nullable Type Description Format id String name String description String rationale String remediation String disabled Boolean categories List of string lifecycleStages List of StorageLifecycleStage eventSource StorageEventSource NOT_APPLICABLE, DEPLOYMENT_EVENT, AUDIT_LOG_EVENT, exclusions List of StorageExclusion scope List of StorageScope severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, enforcementActions List of StorageEnforcementAction FAIL_DEPLOYMENT_CREATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object creates/updates. FAIL_KUBE_REQUEST_ENFORCEMENT takes effect only if admission control webhook is enabled to listen on exec and port-forward events. 
FAIL_DEPLOYMENT_UPDATE_ENFORCEMENT takes effect only if admission control webhook is configured to enforce on object updates. notifiers List of string lastUpdated Date date-time SORTName String For internal use only. SORTLifecycleStage String For internal use only. SORTEnforcement Boolean For internal use only. policyVersion String policySections List of StoragePolicySection mitreAttackVectors List of PolicyMitreAttackVectors criteriaLocked Boolean Read-only field. If true, the policy's criteria fields are rendered read-only. mitreVectorsLocked Boolean Read-only field. If true, the policy's MITRE ATT&CK fields are rendered read-only. isDefault Boolean Read-only field. Indicates the policy is a default policy if true and a custom policy if false. 3.4.7.28. StoragePolicyGroup Field Name Required Nullable Type Description Format fieldName String booleanOperator StorageBooleanOperator OR, AND, negate Boolean values List of StoragePolicyValue 3.4.7.29. StoragePolicySection Field Name Required Nullable Type Description Format sectionName String policyGroups List of StoragePolicyGroup 3.4.7.30. StoragePolicyValue Field Name Required Nullable Type Description Format value String 3.4.7.31. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 3.4.7.32. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 3.4.7.33. StorageScope Field Name Required Nullable Type Description Format cluster String namespace String label StorageScopeLabel 3.4.7.34. StorageScopeLabel Field Name Required Nullable Type Description Format key String value String 3.4.7.35. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.4.7.36. StorageViolationState Enum Values ACTIVE SNOOZED RESOLVED ATTEMPTED 3.4.7.37. ViolationKeyValueAttrs Field Name Required Nullable Type Description Format attrs List of KeyValueAttrsKeyValueAttr 3.4.7.38. ViolationNetworkFlowInfo Field Name Required Nullable Type Description Format protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, source NetworkFlowInfoEntity destination NetworkFlowInfoEntity 3.5. ResolveAlert PATCH /v1/alerts/{id}/resolve ResolveAlert marks the given alert (by ID) as resolved. 3.5.1. Description 3.5.2. Parameters 3.5.2.1. Path Parameters Name Description Required Default Pattern id X null 3.5.2.2. Body Parameter Name Description Required Default Pattern body V1ResolveAlertRequest X 3.5.3. Return Type Object 3.5.4. Content Type application/json 3.5.5. Responses Table 3.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 3.5.6. Samples 3.5.7. Common object reference 3.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.5.7.3. V1ResolveAlertRequest Field Name Required Nullable Type Description Format id String whitelist Boolean addToBaseline Boolean 3.6. SnoozeAlert PATCH /v1/alerts/{id}/snooze SnoozeAlert is deprecated. 3.6.1. Description 3.6.2. Parameters 3.6.2.1. Path Parameters Name Description Required Default Pattern id X null 3.6.2.2. Body Parameter Name Description Required Default Pattern body V1SnoozeAlertRequest X 3.6.3. Return Type Object 3.6.4. Content Type application/json 3.6.5. Responses Table 3.6. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 3.6.6. Samples 3.6.7. Common object reference 3.6.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.6.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.6.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.6.7.3. V1SnoozeAlertRequest Field Name Required Nullable Type Description Format id String snoozeTill Date date-time 3.7. ResolveAlerts PATCH /v1/alerts/resolve ResolveAlertsByQuery marks alerts matching search query as resolved. 3.7.1. Description 3.7.2. Parameters 3.7.2.1. Body Parameter Name Description Required Default Pattern body V1ResolveAlertsRequest X 3.7.3. Return Type Object 3.7.4. Content Type application/json 3.7.5. Responses Table 3.7. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 3.7.6. Samples 3.7.7. Common object reference 3.7.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.7.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.7.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.7.7.3. V1ResolveAlertsRequest Field Name Required Nullable Type Description Format query String 3.8. GetAlertsCounts GET /v1/alerts/summary/counts GetAlertsCounts returns the number of alerts in the requested cluster or category. 3.8.1. Description 3.8.2. Parameters 3.8.2.1. Query Parameters Name Description Required Default Pattern request.query - null request.pagination.limit - null request.pagination.offset - null request.pagination.sortOption.field - null request.pagination.sortOption.reversed - null request.pagination.sortOption.aggregateBy.aggrFunc - UNSET request.pagination.sortOption.aggregateBy.distinct - null groupBy - UNSET 3.8.3. Return Type V1GetAlertsCountsResponse 3.8.4. Content Type application/json 3.8.5. Responses Table 3.8. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1GetAlertsCountsResponse 0 An unexpected error response. RuntimeError 3.8.6. Samples 3.8.7. Common object reference 3.8.7.1. AlertGroupAlertCounts Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, count String int64 3.8.7.2. GetAlertsCountsResponseAlertGroup Field Name Required Nullable Type Description Format group String counts List of AlertGroupAlertCounts 3.8.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.8.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.8.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.8.7.5. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.8.7.6. 
V1GetAlertsCountsResponse Field Name Required Nullable Type Description Format groups List of GetAlertsCountsResponseAlertGroup 3.9. GetAlertsGroup GET /v1/alerts/summary/groups GetAlertsGroup returns alerts grouped by policy. 3.9.1. Description 3.9.2. Parameters 3.9.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.9.3. Return Type V1GetAlertsGroupResponse 3.9.4. Content Type application/json 3.9.5. Responses Table 3.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAlertsGroupResponse 0 An unexpected error response. RuntimeError 3.9.6. Samples 3.9.7. Common object reference 3.9.7.1. ListAlertPolicyDevFields Field Name Required Nullable Type Description Format SORTName String 3.9.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.9.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.9.7.4. StorageListAlertPolicy Field Name Required Nullable Type Description Format id String name String severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, description String categories List of string developerInternalFields ListAlertPolicyDevFields 3.9.7.5. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.9.7.6. V1GetAlertsGroupResponse Field Name Required Nullable Type Description Format alertsByPolicies List of V1GetAlertsGroupResponsePolicyGroup 3.9.7.7. V1GetAlertsGroupResponsePolicyGroup Field Name Required Nullable Type Description Format policy StorageListAlertPolicy numAlerts String int64 3.10. GetAlertTimeseries GET /v1/alerts/summary/timeseries GetAlertTimeseries returns the alerts sorted by time. 3.10.1. Description 3.10.2. Parameters 3.10.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 3.10.3. Return Type V1GetAlertTimeseriesResponse 3.10.4. Content Type application/json 3.10.5. Responses Table 3.10. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAlertTimeseriesResponse 0 An unexpected error response. RuntimeError 3.10.6. Samples 3.10.7. Common object reference 3.10.7.1. ClusterAlertsAlertEvents Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, events List of V1AlertEvent 3.10.7.2. GetAlertTimeseriesResponseClusterAlerts Field Name Required Nullable Type Description Format cluster String severities List of ClusterAlertsAlertEvents 3.10.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 3.10.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. 
This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 3.10.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 3.10.7.5. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 3.10.7.6. V1AlertEvent Field Name Required Nullable Type Description Format time String int64 type V1Type CREATED, REMOVED, id String 3.10.7.7. V1GetAlertTimeseriesResponse Field Name Required Nullable Type Description Format clusters List of GetAlertTimeseriesResponseClusterAlerts 3.10.7.8. V1Type Enum Values CREATED REMOVED
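Because the AlertService operations above are plain REST calls, a short client sketch may help tie the endpoints, query parameters, and request bodies together. The following Python snippet is a minimal, non-authoritative example: the ROX_ENDPOINT and ROX_API_TOKEN environment variable names, the use of the requests library, the example search query string, and the disabled TLS verification are all assumptions for illustration; only the paths (/v1/alerts/summary/counts, /v1/alerts/{id}/resolve), the documented query parameters (request.query, groupBy), and the V1ResolveAlertRequest body fields are taken from this reference.

```python
import os
import requests

# Placeholder configuration (assumption): point these at your Central instance.
BASE_URL = os.environ.get("ROX_ENDPOINT", "https://central.example.com")
HEADERS = {"Authorization": "Bearer " + os.environ["ROX_API_TOKEN"]}

# GetAlertsCounts: GET /v1/alerts/summary/counts, optionally grouped via the
# documented 'groupBy' query parameter. The query string is an illustrative guess.
counts = requests.get(
    f"{BASE_URL}/v1/alerts/summary/counts",
    headers=HEADERS,
    params={"request.query": "Severity:CRITICAL_SEVERITY", "groupBy": "CLUSTER"},
    verify=False,  # assumes a self-signed Central certificate; remove in production
).json()
for group in counts.get("groups", []):
    print(group.get("group"), group.get("counts"))

# ResolveAlert: PATCH /v1/alerts/{id}/resolve with a V1ResolveAlertRequest body.
alert_id = "example-alert-id"  # placeholder, not a real alert ID
resp = requests.patch(
    f"{BASE_URL}/v1/alerts/{alert_id}/resolve",
    headers=HEADERS,
    json={"id": alert_id, "addToBaseline": False},
    verify=False,
)
resp.raise_for_status()
```

The PATCH body mirrors the V1ResolveAlertRequest message documented in section 3.5.7.3; any other fields would need to be checked against that table before use.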
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "A special ListAlert-only enumeration of all resource types. Unlike Alert.Resource.ResourceType this also includes deployment as a type This must be kept in sync with Alert.Resource.ResourceType (excluding the deployment value)", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Represents an alert on a kubernetes resource other than a deployment (configmaps, secrets, etc.)", "Next tag: 12", "Next available tag: 13", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/alertservice
Chapter 33. FHIR
Chapter 33. FHIR Both producer and consumer are supported The FHIR component integrates with the HAPI-FHIR library which is an open-source implementation of the FHIR (Fast Healthcare Interoperability Resources) specification in Java. 33.1. Dependencies When using fhir with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-fhir-starter</artifactId> </dependency> 33.2. URI Format The FHIR Component uses the following URI format: Endpoint prefix can be one of: capabilities create delete history load-page meta operation patch read search transaction update validate 33.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 33.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 33.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 33.4. Component Options The FHIR component supports 27 options, which are listed below. Name Description Default Type encoding (common) Encoding to use for all request. Enum values: JSON XML String fhirVersion (common) The FHIR Version to use. Enum values: DSTU2 DSTU2_HL7ORG DSTU2_1 DSTU3 R4 R5 R4 String log (common) Will log every requests and responses. false boolean prettyPrint (common) Pretty print all request. false boolean serverUrl (common) The FHIR server base URL. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) To use the custom client. IGenericClient clientFactory (advanced) To use the custom client factory. IRestfulClientFactory compress (advanced) Compresses outgoing (POST/PUT) contents to the GZIP format. false boolean configuration (advanced) To use the shared configuration. FhirConfiguration connectionTimeout (advanced) How long to try and establish the initial TCP connection (in ms). 10000 Integer deferModelScanning (advanced) When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false boolean fhirContext (advanced) FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. FhirContext forceConformanceCheck (advanced) Force conformance check. false boolean sessionCookie (advanced) HTTP session cookie to add to every request. String socketTimeout (advanced) How long to block for individual read/write operations (in ms). 10000 Integer summary (advanced) Request that the server modify the response using the _summary param. Enum values: COUNT TEXT DATA TRUE FALSE String validationMode (advanced) When should Camel validate the FHIR Server's conformance statement. Enum values: NEVER ONCE ONCE String proxyHost (proxy) The proxy host. String proxyPassword (proxy) The proxy password. String proxyPort (proxy) The proxy port. Integer proxyUser (proxy) The proxy username. String accessToken (security) OAuth access token. String password (security) Username to use for basic authentication. String username (security) Username to use for basic authentication. String 33.5. Endpoint Options The FHIR endpoint is configured using URI syntax: with the following path and query parameters: 33.5.1. Path Parameters (2 parameters) Name Description Default Type apiName (common) Required What kind of operation to perform. Enum values: CAPABILITIES CREATE DELETE HISTORY LOAD_PAGE META OPERATION PATCH READ SEARCH TRANSACTION UPDATE VALIDATE FhirApiName methodName (common) Required What sub operation to use for the selected operation. String 33.5.2. Query Parameters (44 parameters) Name Description Default Type encoding (common) Encoding to use for all request. Enum values: JSON XML String fhirVersion (common) The FHIR Version to use. Enum values: DSTU2 DSTU2_HL7ORG DSTU2_1 DSTU3 R4 R5 R4 String inBody (common) Sets the name of a parameter to be passed in the exchange In Body. String log (common) Will log every requests and responses. false boolean prettyPrint (common) Pretty print all request. false boolean serverUrl (common) The FHIR server base URL. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean client (advanced) To use the custom client. IGenericClient clientFactory (advanced) To use the custom client factory. IRestfulClientFactory compress (advanced) Compresses outgoing (POST/PUT) contents to the GZIP format. false boolean connectionTimeout (advanced) How long to try and establish the initial TCP connection (in ms). 10000 Integer deferModelScanning (advanced) When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false boolean fhirContext (advanced) FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. FhirContext forceConformanceCheck (advanced) Force conformance check. false boolean sessionCookie (advanced) HTTP session cookie to add to every request. String socketTimeout (advanced) How long to block for individual read/write operations (in ms). 10000 Integer summary (advanced) Request that the server modify the response using the _summary param. Enum values: COUNT TEXT DATA TRUE FALSE String validationMode (advanced) When should Camel validate the FHIR Server's conformance statement. Enum values: NEVER ONCE ONCE String proxyHost (proxy) The proxy host. String proxyPassword (proxy) The proxy password. String proxyPort (proxy) The proxy port. Integer proxyUser (proxy) The proxy username. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. 
The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean accessToken (security) OAuth access token. String password (security) Username to use for basic authentication. String username (security) Username to use for basic authentication. String 33.6. API Parameters (13 APIs) The @FHIR endpoint is an API based component and has additional parameters based on which API name and API method is used. The API name and API method is located in the endpoint URI as the apiName/methodName path parameters: There are 13 API names as listed in the table below: API Name Type Description capabilities Both API to Fetch the capability statement for the server create Both API for the create operation, which creates a new resource instance on the server delete Both API for the delete operation, which performs a logical delete on a server resource history Both API for the history method load-page Both API that Loads the / bundle of resources from a paged set, using the link specified in the link type= tag within the atom bundle meta Both API for the meta operations, which can be used to get, add and remove tags and other Meta elements from a resource or across the server operation Both API for extended FHIR operations patch Both API for the patch operation, which performs a logical patch on a server resource read Both API method for read operations search Both API to search for resources matching a given set of criteria transaction Both API for sending a transaction (collection of resources) to the server to be executed as a single unit update Both API for the update operation, which performs a logical delete on a server resource validate Both API for validating resources Each API is documented in the following sections to come. 33.6.1. 
API: capabilities Both producer and consumer are supported The capabilities API is defined in the syntax as follows: The method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description ofType Retrieve the conformance statement using the given model type 33.6.1.1. Method ofType Signatures: org.hl7.fhir.instance.model.api.IBaseConformance ofType(Class<org.hl7.fhir.instance.model.api.IBaseConformance> type, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/ofType API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map type The model type Class In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides the message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.2. API: create Both producer and consumer are supported The create API is defined in the syntax as follows: The 1 method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Creates an IBaseResource on the server 33.6.2.1. Method resource Signatures: ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map preferReturn Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed and accessible to the client via MethodOutcome#getResource() , may be null PreferReturnEnum resource The resource to create IBaseResource resourceAsString The resource to create String url The search URL to use. The format of this URL should be of the form ResourceType?Parameters, for example: Patient?name=Smith&identifier=13.2.4.11.4%7C847366, may be null String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides the message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.3.
API: delete Both producer and consumer are supported The delete API is defined in the syntax as follows: The 3 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Deletes the given resource resourceById Deletes the resource by resource type (e.g. Patient) and ID resourceConditionalByUrl Specifies that the delete should be performed as a conditional delete against a given search URL 33.6.3.1. Method resource Signatures: org.hl7.fhir.instance.model.api.IBaseOperationOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map resource The IBaseResource to delete IBaseResource 33.6.3.2. Method resourceById Signatures: org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceById(String type, String stringId, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceById(org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceById API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The IIdType referencing the resource IIdType stringId The resource ID String type The resource type, e.g. Patient String 33.6.3.3. Method resourceConditionalByUrl Signatures: org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceConditionalByUrl(String url, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceConditionalByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map url The search URL to use. The format of this URL should be of the form ResourceType?Parameters, for example: Patient?name=Smith&identifier=13.2.4.11.4%7C847366 String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides the message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.4. API: history Both producer and consumer are supported The history API is defined in the syntax as follows: The 3 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description onInstance Perform the operation across all versions of a specific resource (by ID and type) on the server onServer Perform the operation across all versions of all resources of all types on the server onType Perform the operation across all versions of all resources of the given type on the server 33.6.4.1.
Method onInstance Signatures: org.hl7.fhir.instance.model.api.IBaseBundle onInstance(org.hl7.fhir.instance.model.api.IIdType id, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onInstance API method has the parameters listed in the table below: Parameter Description Type count Request that the server return only up to theCount number of resources, may be NULL Integer cutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL Date extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iCutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL IPrimitiveType id The IIdType which must be populated with both a resource type and a resource ID at IIdType returnType Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. Class 33.6.4.2. Method onServer Signatures: org.hl7.fhir.instance.model.api.IBaseBundle onServer(Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onServer API method has the parameters listed in the table below: Parameter Description Type count Request that the server return only up to theCount number of resources, may be NULL Integer cutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL Date extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iCutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL IPrimitiveType returnType Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. Class 33.6.4.3. Method onType Signatures: org.hl7.fhir.instance.model.api.IBaseBundle onType(Class<org.hl7.fhir.instance.model.api.IBaseResource> resourceType, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onType API method has the parameters listed in the table below: Parameter Description Type count Request that the server return only up to theCount number of resources, may be NULL Integer cutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL Date extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iCutoff Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL IPrimitiveType resourceType The resource type to search for Class returnType Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). 
Use this method if you are accessing a DSTU2 server. Class In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.5. API: load-page Both producer and consumer are supported The load-page API is defined in the syntax as follows: The 3 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description byUrl Load a page of results using the given URL and bundle type and return a DSTU1 Atom bundle next Load the next page of results using the link with relation next in the bundle previous Load the previous page of results using the link with relation prev in the bundle 33.6.5.1. Method byUrl Signatures: org.hl7.fhir.instance.model.api.IBaseBundle byUrl(String url, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/byUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map returnType The return type Class url The search url String 33.6.5.2. Method next Signatures: org.hl7.fhir.instance.model.api.IBaseBundle next(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/next API method has the parameters listed in the table below: Parameter Description Type bundle The IBaseBundle IBaseBundle extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map 33.6.5.3. Method previous Signatures: org.hl7.fhir.instance.model.api.IBaseBundle previous(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/previous API method has the parameters listed in the table below: Parameter Description Type bundle The IBaseBundle IBaseBundle extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.6. API: meta Both producer and consumer are supported The meta API is defined in the syntax as follows: The 5 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description add Add the elements in the given metadata to the already existing set (do not remove any) delete Delete the elements in the given metadata from the given id getFromResource Fetch the current metadata from a specific resource getFromServer Fetch the current metadata from the whole Server getFromType Fetch the current metadata from a specific type 33.6.6.1. 
Method add Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType add(org.hl7.fhir.instance.model.api.IBaseMetaType meta, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/add API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The id IIdType meta The IBaseMetaType class IBaseMetaType 33.6.6.2. Method delete Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType delete(org.hl7.fhir.instance.model.api.IBaseMetaType meta, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/delete API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The id IIdType meta The IBaseMetaType class IBaseMetaType 33.6.6.3. Method getFromResource Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType getFromResource(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/getFromResource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The id IIdType metaType The IBaseMetaType class Class 33.6.6.4. Method getFromServer Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType getFromServer(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/getFromServer API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map metaType The type of the meta datatype for the given FHIR model version (should be MetaDt.class or MetaType.class) Class 33.6.6.5. Method getFromType Signatures: org.hl7.fhir.instance.model.api.IBaseMetaType getFromType(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, String resourceType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/getFromType API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map metaType The IBaseMetaType class Class resourceType The resource type e.g Patient String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.7. API: operation Both producer and consumer are supported The operation API is defined in the syntax as follows: The 5 method(s) is listed in the table below, followed by detailed syntax for each method. 
(API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description onInstance Perform the operation across all versions of a specific resource (by ID and type) on the server onInstanceVersion This operation operates on a specific version of a resource onServer Perform the operation across all versions of all resources of all types on the server onType Perform the operation across all versions of all resources of the given type on the server processMessage This operation is called USDprocess-message as defined by the FHIR specification 33.6.7.1. Method onInstance Signatures: org.hl7.fhir.instance.model.api.IBaseResource onInstance(org.hl7.fhir.instance.model.api.IIdType id, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onInstance API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id Resource (version will be stripped) IIdType name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 33.6.7.2. Method onInstanceVersion Signatures: org.hl7.fhir.instance.model.api.IBaseResource onInstanceVersion(org.hl7.fhir.instance.model.api.IIdType id, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onInstanceVersion API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id Resource version IIdType name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 33.6.7.3. 
Method onServer Signatures: org.hl7.fhir.instance.model.api.IBaseResource onServer(String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onServer API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 33.6.7.4. Method onType Signatures: org.hl7.fhir.instance.model.api.IBaseResource onType(Class<org.hl7.fhir.instance.model.api.IBaseResource> resourceType, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/onType API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map name Operation name String outputParameterType The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL Class parameters The parameters to use as input. May also be null if the operation does not require any input parameters. IBaseParameters resourceType The resource type to operate on Class returnType If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/USDeverything) which return a bundle instead of a Parameters resource, may be NULL Class useHttpGet Use HTTP GET verb Boolean 33.6.7.5. Method processMessage Signatures: org.hl7.fhir.instance.model.api.IBaseBundle processMessage(String respondToUri, org.hl7.fhir.instance.model.api.IBaseBundle msgBundle, boolean asynchronous, Class<org.hl7.fhir.instance.model.api.IBaseBundle> responseClass, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/processMessage API method has the parameters listed in the table below: Parameter Description Type asynchronous Whether to process the message asynchronously or synchronously, defaults to synchronous. 
Boolean extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map msgBundle Set the Message Bundle to POST to the messaging server IBaseBundle respondToUri An optional query parameter indicating that responses from the receiving server should be sent to this URI, may be NULL String responseClass The response class Class In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.8. API: patch Both producer and consumer are supported The patch API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description patchById Applies the patch to the given resource ID patchByUrl Specifies that the update should be performed as a conditional create against a given search URL 33.6.8.1. Method patchById Signatures: ca.uhn.fhir.rest.api.MethodOutcome patchById(String patchBody, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome patchById(String patchBody, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/patchById API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The resource ID to patch IIdType patchBody The body of the patch document serialized in either XML or JSON which conforms to String preferReturn Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed an accessible to the client via MethodOutcome#getResource() PreferReturnEnum stringId The resource ID to patch String 33.6.8.2. Method patchByUrl Signatures: ca.uhn.fhir.rest.api.MethodOutcome patchByUrl(String patchBody, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/patchByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map patchBody The body of the patch document serialized in either XML or JSON which conforms to String preferReturn Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed an accessible to the client via MethodOutcome#getResource() PreferReturnEnum url The search URL to use. The format of this URL should be of the form ResourceTypeParameters, for example: Patientname=Smith&identifier=13.2.4.11.4%7C847366 String In addition to the parameters above, the fhir API can also use any of the Query Parameters . 
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.9. API: read Both producer and consumer are supported The read API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resourceById Reads a IBaseResource on the server by id resourceByUrl Reads a IBaseResource on the server by url 33.6.9.1. Method resourceById Signatures: org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, Long longId, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, String stringId, String version, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, org.hl7.fhir.instance.model.api.IIdType id, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, Long longId, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, String stringId, String ifVersionMatches, String version, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, org.hl7.fhir.instance.model.api.IIdType id, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceById API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The IIdType referencing the resource IIdType ifVersionMatches A version to match against the newest version on the server String longId The resource ID Long resource The resource to read (e.g. Patient) Class resourceClass The resource to read (e.g. 
Patient) String returnNull Return null if version matches Boolean returnResource Return the resource if version matches IBaseResource stringId The resource ID String throwError Throw error if the version matches Boolean version The resource version String 33.6.9.2. Method resourceByUrl Signatures: org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, String url, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, org.hl7.fhir.instance.model.api.IIdType iUrl, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(String resourceClass, String url, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(String resourceClass, org.hl7.fhir.instance.model.api.IIdType iUrl, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map iUrl The IIdType referencing the resource by absolute url IIdType ifVersionMatches A version to match against the newest version on the server String resource The resource to read (e.g. Patient) Class resourceClass The resource to read (e.g. Patient.class) String returnNull Return null if version matches Boolean returnResource Return the resource if version matches IBaseResource throwError Throw error if the version matches Boolean url Referencing the resource by absolute url String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.10. API: search Both producer and consumer are supported The search API is defined in the syntax as follows: The 1 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description searchByUrl Perform a search directly by URL 33.6.10.1. 
Method searchByUrl Signatures: org.hl7.fhir.instance.model.api.IBaseBundle searchByUrl(String url, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/searchByUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map url The URL to search for. Note that this URL may be complete (e.g. ) in which case the client's base URL will be ignored. Or it can be relative (e.g. Patientname=foo) in which case the client's base URL will be used. String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.11. API: transaction Both producer and consumer are supported The transaction API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description withBundle Use the given raw text (should be a Bundle resource) as the transaction input withResources Use a list of resources as the transaction input 33.6.11.1. Method withBundle Signatures: String withBundle(String stringBundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); org.hl7.fhir.instance.model.api.IBaseBundle withBundle(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/withBundle API method has the parameters listed in the table below: Parameter Description Type bundle Bundle to use in the transaction IBaseBundle extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map stringBundle Bundle to use in the transaction String 33.6.11.2. Method withResources Signatures: java.util.List<org.hl7.fhir.instance.model.api.IBaseResource> withResources(java.util.List<org.hl7.fhir.instance.model.api.IBaseResource> resources, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/withResources API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map resources Resources to use in the transaction List In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.12. API: update Both producer and consumer are supported The update API is defined in the syntax as follows: The 2 method(s) is listed in the table below, followed by detailed syntax for each method. 
(API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Updates a IBaseResource on the server by id resourceBySearchUrl Updates a IBaseResource on the server by search url 33.6.12.1. Method resource Signatures: ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map id The IIdType referencing the resource IIdType preferReturn Whether the server include or suppress the resource body as a part of the result PreferReturnEnum resource The resource to update (e.g. Patient) IBaseResource resourceAsString The resource body to update String stringId The ID referencing the resource String 33.6.12.2. Method resourceBySearchUrl Signatures: ca.uhn.fhir.rest.api.MethodOutcome resourceBySearchUrl(String resourceAsString, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resourceBySearchUrl(org.hl7.fhir.instance.model.api.IBaseResource resource, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resourceBySearchUrl API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map preferReturn Whether the server include or suppress the resource body as a part of the result PreferReturnEnum resource The resource to update (e.g. Patient) IBaseResource resourceAsString The resource body to update String url Specifies that the update should be performed as a conditional create against a given search URL String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.6.13. 
API: validate Both producer and consumer are supported The validate API is defined in the syntax as follows: The 1 method(s) is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name) Method Description resource Validates the resource 33.6.13.1. Method resource Signatures: ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters); The fhir/resource API method has the parameters listed in the table below: Parameter Description Type extraParameters See ExtraParameters for a full list of parameters that can be passed, may be NULL Map resource The IBaseResource to validate IBaseResource resourceAsString Raw resource to validate String In addition to the parameters above, the fhir API can also use any of the Query Parameters . Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter . The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header. 33.7. Spring Boot Auto-Configuration The component supports 56 options, which are listed below. Name Description Default Type camel.component.fhir.access-token OAuth access token. String camel.component.fhir.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.fhir.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.fhir.client To use the custom client. The option is a ca.uhn.fhir.rest.client.api.IGenericClient type. IGenericClient camel.component.fhir.client-factory To use the custom client factory. The option is a ca.uhn.fhir.rest.client.api.IRestfulClientFactory type. IRestfulClientFactory camel.component.fhir.compress Compresses outgoing (POST/PUT) contents to the GZIP format. false Boolean camel.component.fhir.configuration To use the shared configuration. The option is a org.apache.camel.component.fhir.FhirConfiguration type. FhirConfiguration camel.component.fhir.connection-timeout How long to try and establish the initial TCP connection (in ms). 10000 Integer camel.component.fhir.defer-model-scanning When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false Boolean camel.component.fhir.enabled Whether to enable auto configuration of the fhir component. This is enabled by default. 
Boolean camel.component.fhir.encoding Encoding to use for all requests. String camel.component.fhir.fhir-context FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. The option is a ca.uhn.fhir.context.FhirContext type. FhirContext camel.component.fhir.fhir-version The FHIR Version to use. R4 String camel.component.fhir.force-conformance-check Force conformance check. false Boolean camel.component.fhir.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.fhir.log Will log every request and response. false Boolean camel.component.fhir.password Password to use for basic authentication. String camel.component.fhir.pretty-print Pretty print all requests. false Boolean camel.component.fhir.proxy-host The proxy host. String camel.component.fhir.proxy-password The proxy password. String camel.component.fhir.proxy-port The proxy port. Integer camel.component.fhir.proxy-user The proxy username. String camel.component.fhir.server-url The FHIR server base URL. String camel.component.fhir.session-cookie HTTP session cookie to add to every request. String camel.component.fhir.socket-timeout How long to block for individual read/write operations (in ms). 10000 Integer camel.component.fhir.summary Request that the server modify the response using the _summary param. String camel.component.fhir.username Username to use for basic authentication. String camel.component.fhir.validation-mode When should Camel validate the FHIR Server's conformance statement. ONCE String camel.dataformat.fhirjson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.fhirjson.dont-encode-elements If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don't encode patient and all its children Patient.name - Don't encode the patient's name Patient.name.family - Don't encode the patient's family name *.text - Don't encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. Set camel.dataformat.fhirjson.dont-strip-versions-from-references-at-paths If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(String) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(String) has been set to true (which is the default). 
List camel.dataformat.fhirjson.enabled Whether to enable auto configuration of the fhirJson data format. This is enabled by default. Boolean camel.dataformat.fhirjson.encode-elements If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient's name Patient.name.family - Encode only the patient's family name .text - Encode the text element on any resource (only the very first position may contain a wildcard) .(mandatory) - This is a special case which causes any mandatory fields (min 0) to be encoded. Set camel.dataformat.fhirjson.encode-elements-applies-to-child-resources-only If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle). false Boolean camel.dataformat.fhirjson.fhir-version The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4. DSTU3 String camel.dataformat.fhirjson.omit-resource-id If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. false Boolean camel.dataformat.fhirjson.override-resource-id-with-bundle-entry-full-url If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource's resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). false Boolean camel.dataformat.fhirjson.pretty-print Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. false Boolean camel.dataformat.fhirjson.server-base-url Sets the server's base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. String camel.dataformat.fhirjson.strip-versions-from-references If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List). false Boolean camel.dataformat.fhirjson.summary-mode If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. false Boolean camel.dataformat.fhirjson.suppress-narratives If set to true (default is false), narratives will not be included in the encoded values. 
false Boolean camel.dataformat.fhirxml.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.fhirxml.dont-encode-elements If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don't encode patient and all its children Patient.name - Don't encode the patient's name Patient.name.family - Don't encode the patient's family name .text - Don't encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. Set camel.dataformat.fhirxml.dont-strip-versions-from-references-at-paths If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(String) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(String) has been set to true (which is the default). List camel.dataformat.fhirxml.enabled Whether to enable auto configuration of the fhirXml data format. This is enabled by default. Boolean camel.dataformat.fhirxml.encode-elements If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient's name Patient.name.family - Encode only the patient's family name .text - Encode the text element on any resource (only the very first position may contain a wildcard) .(mandatory) - This is a special case which causes any mandatory fields (min 0) to be encoded. Set camel.dataformat.fhirxml.encode-elements-applies-to-child-resources-only If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle). false Boolean camel.dataformat.fhirxml.fhir-version The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4. DSTU3 String camel.dataformat.fhirxml.omit-resource-id If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. false Boolean camel.dataformat.fhirxml.override-resource-id-with-bundle-entry-full-url If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource's resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). 
false Boolean camel.dataformat.fhirxml.pretty-print Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. false Boolean camel.dataformat.fhirxml.server-base-url Sets the server's base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. String camel.dataformat.fhirxml.strip-versions-from-references If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List). false Boolean camel.dataformat.fhirxml.summary-mode If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. false Boolean camel.dataformat.fhirxml.suppress-narratives If set to true (default is false), narratives will not be included in the encoded values. false Boolean
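The Spring Boot options listed above configure the fhir component globally; the following sketch shows how they come together in a simple producer route. It is an illustration rather than part of the component reference: the FHIR server URL, credentials and Patient data are assumptions, and camel.component.fhir.server-url (plus username and password if required) is expected to be set in application.properties, or passed as a serverUrl option on the endpoint URI instead.

import org.apache.camel.builder.RouteBuilder;
import org.hl7.fhir.r4.model.HumanName;
import org.hl7.fhir.r4.model.Patient;
import org.springframework.stereotype.Component;

@Component
public class FhirCrudRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // create/resource: with inBody=resource the message body is bound to the
        // "resource" parameter documented for the create API above.
        from("timer:createPatient?repeatCount=1")
            .process(exchange -> {
                Patient patient = new Patient();
                patient.addName(new HumanName().setFamily("Smith").addGiven("John"));
                exchange.getMessage().setBody(patient);
            })
            .to("fhir:create/resource?inBody=resource&fhirVersion=R4")
            .to("log:fhir.create?showAll=true");

        // read/resourceById: the remaining parameters are supplied as CamelFhir.*
        // message headers, following the header convention described above.
        // Optional parameters such as ifVersionMatches are simply left unset.
        from("direct:readPatient")
            .setHeader("CamelFhir.resourceClass", constant("Patient"))
            .setHeader("CamelFhir.stringId", body())
            .to("fhir:read/resourceById?fhirVersion=R4")
            .to("log:fhir.read?showAll=true");
    }
}

Sending a resource ID to direct:readPatient (for example with a ProducerTemplate) returns the matching Patient, provided the component points at a reachable FHIR server.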
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-fhir-starter</artifactId> </dependency>", "fhir://endpoint-prefix/endpoint?[options]", "fhir:apiName/methodName", "fhir:apiName/methodName", "fhir:capabilities/methodName?[parameters]", "fhir:create/methodName?[parameters]", "fhir:delete/methodName?[parameters]", "fhir:history/methodName?[parameters]", "fhir:load-page/methodName?[parameters]", "fhir:meta/methodName?[parameters]", "fhir:operation/methodName?[parameters]", "fhir:patch/methodName?[parameters]", "fhir:read/methodName?[parameters]", "fhir:search/methodName?[parameters]", "fhir:transaction/methodName?[parameters]", "fhir:update/methodName?[parameters]", "fhir:validate/methodName?[parameters]" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-fhir-component-starter
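Before moving on, one more hedged sketch rounds off the FHIR component entry above by combining the search and load-page APIs in the fhir:apiName/methodName?[parameters] URI form shown in the syntax listing. The server URL and the Patient?name=Smith query are placeholders, and real code should check whether the returned bundle actually has a next link before requesting another page.

import org.apache.camel.builder.RouteBuilder;

public class FhirSearchPagingRoute extends RouteBuilder {

    // Placeholder server; in practice this is usually set once on the component.
    private static final String SERVER = "serverUrl=http://localhost:8080/fhir&fhirVersion=R4";

    @Override
    public void configure() {
        from("direct:findSmiths")
            // search/searchByUrl takes the relative or absolute query string as "url"
            .setHeader("CamelFhir.url", constant("Patient?name=Smith"))
            .to("fhir:search/searchByUrl?" + SERVER)
            // the result is an IBaseBundle; hand it to load-page/next to fetch the next page
            .setHeader("CamelFhir.bundle", body())
            .to("fhir:load-page/next?" + SERVER)
            .to("log:fhir.paging?showAll=true");
    }
}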
Chapter 53. module
Chapter 53. module This chapter describes the commands under the module command. 53.1. module list List module versions Usage: Table 53.1. Command arguments Value Summary -h, --help Show this help message and exit --all Show all modules that have version information Table 53.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 53.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 53.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 53.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack module list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/module
Overview, concepts, and deployment considerations
Overview, concepts, and deployment considerations Red Hat Satellite 6.16 Explore the Satellite architecture and plan Satellite deployment Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/index
Chapter 10. Viewing threads
Chapter 10. Viewing threads You can view and monitor the state of threads. Procedure Click the Runtime tab and then the Threads subtab. The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order. To sort the list by increasing ID, click the ID column label. Optionally, filter the list by thread state (for example, Blocked ) or by thread name. To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More .
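The Threads page surfaces data that the JVM already exposes through JMX, so the same information can be collected programmatically when the console is not available. The following Java sketch is not part of the Fuse console itself; it only illustrates the kind of per-thread state and lock detail the page displays, using the standard ThreadMXBean API.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadStateReport {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // dumpAllThreads(lockedMonitors, lockedSynchronizers) also captures stack traces.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            // Filter by state, mirroring the Blocked filter in the console.
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.printf("Blocked thread %d (%s) is waiting on %s%n",
                        info.getThreadId(), info.getThreadName(), info.getLockName());
            }
        }
    }
}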
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/fuse-console-view-threads-all_fcopenshift
2.2. Install Maven
2.2. Install Maven Maven is a build system for projects that use the Project Object Model (POM). It downloads package dependencies quickly and easily. If you have an infrastructure team providing your Red Hat JBoss Data Virtualization environment, you can skip this procedure. Otherwise, you will need to follow it to install Maven to build your projects. Prerequisites The following software must be installed: An archiving tool for extracting the contents of compressed files. Open JDK. Procedure 2.1. Install Maven Download Maven. Go to http://maven.apache.org/download.cgi . Download the apache-maven-[latest-version] ZIP file. Install and configure Maven. On Red Hat Enterprise Linux Extract the ZIP archive to the directory where you wish to install Maven. Open your .bash_profile file: Add the M2_HOME environment variable to the file: Add the M2 environment variable to the file: Add the variable USDJAVA_HOME/bin to set the path to the correct Java installation. Note Make sure JAVA_HOME is pointing to a valid location. Add the M2 environment variable to the file: Save the file and exit your text editor. Reload your profile: Run the following command to verify that Maven is installed successfully on your machine: On Microsoft Windows Extract the ZIP archive to the directory where you wish to install Maven. The subdirectory apache-maven-[latest-version] is created from the archive. Press Start+Pause|Break . The System Properties dialog box is displayed. Click the Advanced tab and click Environment Variables . Under System Variables, select Path . Click Edit and add the two Maven paths using a semicolon to separate each entry. Add the M2_HOME variable and set the path to C:\path\to\your\Maven . Add the M2 variable and set the value to %M2_HOME%\bin. Update or create the Path environment variable: Add the %M2% variable to allow Maven to be executed from the command line. Add the variable %JAVA_HOME%\bin to set the path to the correct Java installation. Click OK to close all the dialog boxes including the System Properties dialog box. Open Windows command prompt and run the following command to verify that Maven is installed successfully on your machine:
[ "vi ~/.bash_profile", "export M2_HOME=/path/to/your/maven", "export M2=USDM2_HOME/bin", "export PATH=USDM2:USDPATH", "source ~/.bash_profile", "mvn --version", "mvn --version" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/install_maven
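As a complement to the manual mvn --version check in the procedure above, the following Java sketch runs the same verification from code and reports the environment variables the procedure sets. It is only an illustration and assumes mvn (or mvn.cmd on Windows) is already on the PATH.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class VerifyMavenInstall {
    public static void main(String[] args) throws Exception {
        // Report the environment variables configured in the procedure above.
        for (String name : new String[] {"JAVA_HOME", "M2_HOME", "M2"}) {
            System.out.println(name + " = " + System.getenv(name));
        }

        // Run "mvn --version" and echo its output, mirroring the manual check.
        String mvn = System.getProperty("os.name").toLowerCase().contains("win") ? "mvn.cmd" : "mvn";
        Process process = new ProcessBuilder(mvn, "--version").redirectErrorStream(true).start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.out.println("mvn exited with status " + process.waitFor());
    }
}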
Chapter 11. Controlling access to the Admin Console
Chapter 11. Controlling access to the Admin Console Each realm created on the Red Hat build of Keycloak has a dedicated Admin Console from which that realm can be managed. The master realm is a special realm that allows admins to manage more than one realm on the system. This chapter goes over all the scenarios for this. 11.1. Master realm access control The master realm in Red Hat build of Keycloak is a special realm and treated differently than other realms. Users in the Red Hat build of Keycloak master realm can be granted permission to manage zero or more realms that are deployed on the Red Hat build of Keycloak server. When a realm is created, Red Hat build of Keycloak automatically creates various roles that grant fine-grain permissions to access that new realm. Access to The Admin Console and Admin REST endpoints can be controlled by mapping these roles to users in the master realm. It's possible to create multiple superusers, as well as users that can only manage specific realms. 11.1.1. Global roles There are two realm-level roles in the master realm. These are: admin create-realm Users with the admin role are superusers and have full access to manage any realm on the server. Users with the create-realm role are allowed to create new realms. They will be granted full access to any new realm they create. 11.1.2. Realm specific roles Admin users within the master realm can be granted management privileges to one or more other realms in the system. Each realm in Red Hat build of Keycloak is represented by a client in the master realm. The name of the client is <realm name>-realm . These clients each have client-level roles defined which define varying level of access to manage an individual realm. The roles available are: view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console. Important Admins with the manage-users role will only be able to assign admin roles to users that they themselves have. So, if an admin has the manage-users role but doesn't have the manage-realm role, they will not be able to assign this role. 11.2. Dedicated realm admin consoles Each realm has a dedicated Admin Console that can be accessed by going to the url /admin/{realm-name}/console . Users within that realm can be granted realm management permissions by assigning specific user role mappings. Each realm has a built-in client called realm-management . You can view this client by going to the Clients left menu item of your realm. This client defines client-level roles that specify permissions that can be granted to manage the realm. view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console.
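The role assignments described above can also be made outside the Admin Console. The following sketch uses the Keycloak Java admin client (org.keycloak:keycloak-admin-client) to grant the view-users role of a realm named myrealm to a master-realm user. The server URL, admin credentials, realm name and the user alice are placeholders rather than values from this guide; the client named myrealm-realm is assumed to exist because the realm does.

import java.util.List;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.representations.idm.ClientRepresentation;
import org.keycloak.representations.idm.RoleRepresentation;
import org.keycloak.representations.idm.UserRepresentation;

public class GrantRealmAdminRole {
    public static void main(String[] args) {
        Keycloak keycloak = KeycloakBuilder.builder()
                .serverUrl("https://keycloak.example.com")   // placeholder server
                .realm("master")
                .clientId("admin-cli")
                .username("admin")                           // placeholder credentials
                .password("admin-password")
                .build();

        // Each managed realm is represented in the master realm by a client
        // named <realm name>-realm; its client-level roles hold the permissions.
        ClientRepresentation realmClient =
                keycloak.realm("master").clients().findByClientId("myrealm-realm").get(0);
        RoleRepresentation viewUsers = keycloak.realm("master").clients()
                .get(realmClient.getId()).roles().get("view-users").toRepresentation();

        // Grant view-users for myrealm to an existing master-realm user.
        UserRepresentation user =
                keycloak.realm("master").users().search("alice").get(0);
        keycloak.realm("master").users().get(user.getId())
                .roles().clientLevel(realmClient.getId()).add(List.of(viewUsers));
    }
}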
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/admin_permissions
Chapter 3. Benchmarking Data Grid on OpenShift
Chapter 3. Benchmarking Data Grid on OpenShift For Data Grid clusters running on OpenShift, Red Hat recommends using Hyperfoil to measure performance. Hyperfoil is a benchmarking framework that provides accurate performance results for distributed services. 3.1. Benchmarking Data Grid After you set up and configure your deployment, start benchmarking your Data Grid cluster to analyze and measure performance. Benchmarking shows you where limits exist so you can adjust your environment and tune your Data Grid configuration to get the best performance, which means achieving the lowest latency and highest throughput possible. It is worth noting that optimal performance is a continual process, not an ultimate goal. When your benchmark tests show that your Data Grid deployment has reached a desired level of performance, you cannot expect those results to be fixed or always valid. 3.2. Installing Hyperfoil Set up Hyperfoil on Red Hat OpenShift by creating an operator subscription and downloading the Hyperfoil distribution that includes the command line interface (CLI). Procedure Create a Hyperfoil Operator subscription through the OperatorHub in the OpenShift Web Console. Note Hyperfoil Operator is available as a Community Operator. Red Hat does not certify the Hyperfoil Operator and does not provide support for it in combination with Data Grid. When you install the Hyperfoil Operator you are prompted to acknowledge a warning about the community version before you can continue. Download the latest Hyperfoil version from the Hyperfoil release page . Additional resources hyperfoil.io Installing Hyperfoil on OpenShift 3.3. Creating a Hyperfoil Controller Instantiate a Hyperfoil Controller on Red Hat OpenShift so you can upload and run benchmark tests with the Hyperfoil Command Line Interface (CLI). Prerequisites Create a Hyperfoil Operator subscription. Procedure Define hyperfoil-controller.yaml . USD cat > hyperfoil-controller.yaml<<EOF apiVersion: hyperfoil.io/v1alpha2 kind: Hyperfoil metadata: name: hyperfoil spec: version: latest EOF Apply the Hyperfoil Controller. USD oc apply -f hyperfoil-controller.yaml Retrieve the route that connects you to the Hyperfoil CLI. USD oc get routes NAME HOST/PORT hyperfoil hyperfoil-benchmark.apps.example.net 3.4. Running Hyperfoil benchmarks Run benchmark tests with Hyperfoil to collect performance data for Data Grid clusters. Prerequisites Create a Hyperfoil Operator subscription. Instantiate a Hyperfoil Controller on Red Hat OpenShift. Procedure Create a benchmark test. USD cat > hyperfoil-benchmark.yaml<<EOF name: hotrod-benchmark hotrod: # Replace <USERNAME>:<PASSWORD> with your Data Grid credentials. # Replace <SERVICE_HOSTNAME>:<PORT> with the host name and port for Data Grid. - uri: hotrod://<USERNAME>:<PASSWORD>@<SERVICE_HOSTNAME>:<PORT> caches: # Replace <CACHE-NAME> with the name of your Data Grid cache. - <CACHE-NAME> agents: agent-1: agent-2: agent-3: agent-4: agent-5: phases: - rampupPut: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &put - putData: - randomInt: cacheKey <- 1 .. 40000 - randomUUID: cacheValue - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. put: <CACHE-NAME> key: key-USD{cacheKey} value: value-USD{cacheValue} - rampupGet: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &get - getData: - randomInt: cacheKey <- 1 .. 
40000 - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. get: <CACHE-NAME> key: key-USD{cacheKey} - doPut: constantRate: startAfter: rampupPut duration: 5m usersPerSec: 10000 maxSessions: 11000 scenario: *put - doGet: constantRate: startAfter: rampupGet duration: 5m usersPerSec: 40000 maxSessions: 41000 scenario: *get EOF Open the route in any browser to access the Hyperfoil CLI. Upload the benchmark test. Run the upload command. [hyperfoil]USD upload Click Select benchmark file and then navigate to the benchmark test on your file system and upload it. Run the benchmark test. [hyperfoil]USD run hotrod-benchmark Get results of the benchmark test. [hyperfoil]USD stats 3.5. Hyperfoil benchmark results Hyperfoil prints results of the benchmarking run in table format with the stats command. [hyperfoil]USD stats Total stats from run <run_id> PHASE METRIC THROUGHPUT REQUESTS MEAN p50 p90 p99 p99.9 p99.99 TIMEOUTS ERRORS BLOCKED Table 3.1. Column descriptions Column Description Value PHASE For each run, Hyperfoil makes GET requests and PUT requests to the Data Grid cluster in two phases. Either doGet or doPut METRIC During both phases of the run, Hyperfoil collects metrics for each GET and PUT request. Either getData or putData THROUGHPUT Captures the total number of requests per second. Number REQUESTS Captures the total number of operations during each phase of the run. Number MEAN Captures the average time for GET or PUT operations to complete. Time in milliseconds ( ms ) p50 Records the amount of time that it takes for 50 percent of requests to complete. Time in milliseconds ( ms ) p90 Records the amount of time that it takes for 90 percent of requests to complete. Time in milliseconds ( ms ) p99 Records the amount of time that it takes for 99 percent of requests to complete. Time in milliseconds ( ms ) p99.9 Records the amount of time that it takes for 99.9 percent of requests to complete. Time in milliseconds ( ms ) p99.99 Records the amount of time that it takes for 99.99 percent of requests to complete. Time in milliseconds ( ms ) TIMEOUTS Captures the total number of timeouts that occurred for operations during each phase of the run. Number ERRORS Captures the total number of errors that occurred during each phase of the run. Number BLOCKED Captures the total number of operations that were blocked or could not complete. Number
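When scripting the steps in Sections 3.3 and 3.4, the CLI route does not have to be copied from the oc get routes output by hand. A minimal sketch, assuming the controller is named hyperfoil and was created in the current project:

# Capture the Hyperfoil CLI route host created by the controller
HYPERFOIL_HOST=$(oc get route hyperfoil -o jsonpath='{.spec.host}')
echo "Hyperfoil CLI available at: http://${HYPERFOIL_HOST}"
# The upload, run, and stats commands are then issued from that CLI session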
[ "cat > hyperfoil-controller.yaml<<EOF apiVersion: hyperfoil.io/v1alpha2 kind: Hyperfoil metadata: name: hyperfoil spec: version: latest EOF", "oc apply -f hyperfoil-controller.yaml", "oc get routes NAME HOST/PORT hyperfoil hyperfoil-benchmark.apps.example.net", "cat > hyperfoil-benchmark.yaml<<EOF name: hotrod-benchmark hotrod: # Replace <USERNAME>:<PASSWORD> with your Data Grid credentials. # Replace <SERVICE_HOSTNAME>:<PORT> with the host name and port for Data Grid. - uri: hotrod://<USERNAME>:<PASSWORD>@<SERVICE_HOSTNAME>:<PORT> caches: # Replace <CACHE-NAME> with the name of your Data Grid cache. - <CACHE-NAME> agents: agent-1: agent-2: agent-3: agent-4: agent-5: phases: - rampupPut: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &put - putData: - randomInt: cacheKey <- 1 .. 40000 - randomUUID: cacheValue - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. put: <CACHE-NAME> key: key-USD{cacheKey} value: value-USD{cacheValue} - rampupGet: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &get - getData: - randomInt: cacheKey <- 1 .. 40000 - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. get: <CACHE-NAME> key: key-USD{cacheKey} - doPut: constantRate: startAfter: rampupPut duration: 5m usersPerSec: 10000 maxSessions: 11000 scenario: *put - doGet: constantRate: startAfter: rampupGet duration: 5m usersPerSec: 40000 maxSessions: 41000 scenario: *get EOF", "[hyperfoil]USD upload", "[hyperfoil]USD run hotrod-benchmark", "[hyperfoil]USD stats", "[hyperfoil]USD stats Total stats from run <run_id> PHASE METRIC THROUGHPUT REQUESTS MEAN p50 p90 p99 p99.9 p99.99 TIMEOUTS ERRORS BLOCKED" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_performance_and_sizing_guide/benchmarking-datagrid
Chapter 5. Installing a cluster on vSphere using the Agent-based Installer
Chapter 5. Installing a cluster on vSphere using the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster with an available release image. 5.1. Additional resources Preparing to install with the Agent-based Installer
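As an illustration of the installer subcommand described above, generating the bootable ISO typically looks like the following sketch. The ./ocp-install directory is an example and is assumed to already contain install-config.yaml and agent-config.yaml:

# Generate the agent ISO from the prepared configuration files
openshift-install agent create image --dir ./ocp-install --log-level=info
# The resulting ISO (typically agent.x86_64.iso) is then attached to the vSphere virtual machines and booted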
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_vsphere/installing-vsphere-agent-based-installer
Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud
Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud You can use Red Hat OpenShift Data Foundation for your workloads that run in IBM Cloud. These workloads might run in Red Hat OpenShift on IBM Cloud clusters that are in the public cloud or in your own IBM Cloud Satellite location. 1.1. Deploying on IBM Cloud public When you create a Red Hat OpenShift on IBM Cloud cluster, you can choose between classic or Virtual Private Cloud (VPC) infrastructure. The Red Hat OpenShift Data Foundation managed cluster add-on supports both infrastructure providers. For classic clusters, the add-on deploys the OpenShift Data Foundation operator with the Local Storage operator. For VPC clusters, the add-on deploys the OpenShift Data Foundation operator which you can use with IBM Cloud Block Storage on VPC storage volumes. Benefits of using the OpenShift Data Foundation managed cluster add-on to install OpenShift Data Foundation instead of installing from OperatorHub Deploy OpenShift Data Foundation from a single CRD instead of manually creating separate resources. For example, in the single CRD that the add-on enables, you configure the namespaces, storagecluster, and other resources you need to run OpenShift Data Foundation. Classic - Automatically create PVs using the storage devices that you specify in your OpenShift Data Foundation CRD. VPC - Dynamically provision IBM Cloud Block Storage on VPC storage volumes for your OpenShift Data Foundation storage cluster. Get patch updates automatically for the managed add-on. Update the OpenShift Data Foundation version by modifying a single field in the CRD. Integrate with IBM Cloud Object Storage by providing credentials in the CRD. 1.1.1. Deploying on classic infrastructure in IBM Cloud You can deploy OpenShift Data Foundation on IBM Cloud classic clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator and the Local Storage operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud classic cluster, you create a single custom resource definition that contains your storage device configuration details. For more information, see the Preparing your cluster for OpenShift Data Foundation . 1.1.2. Deploying on VPC infrastructure in IBM Cloud You can deploy OpenShift Data Foundation on IBM Cloud VPC clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud VPC cluster, you create a custom resource definition that contains your worker node information and the IBM Cloud Block Storage for VPC storage classes that you want to use to dynamically provision the OpenShift Data Foundation storage devices. For more information, see the Preparing your cluster for OpenShift Data Foundation . 1.2. Deploying on IBM Cloud Satellite With IBM Cloud Satellite, you can create a location with your own infrastructure, such as an on-premises data center or another cloud provider, to bring IBM Cloud services anywhere, including where your data resides. If you store your data by using Red Hat OpenShift Data Foundation, you can use Satellite storage templates to consistently install OpenShift Data Foundation across the clusters in your Satellite location. The templates help you create a Satellite configuration of the various OpenShift Data Foundation parameters, such as the device paths to your local disks or the storage classes that you want to use to dynamically provision volumes.
Then, you assign the Satellite configuration to the clusters where you want to install OpenShift Data Foundation. Benefits of using Satellite storage to install OpenShift Data Foundation instead of installing from OperatorHub Create versions of your OpenShift Data Foundation configuration to install across multiple clusters or expand your existing configuration. Update OpenShift Data Foundation across multiple clusters consistently. Standardize storage classes that developers can use for persistent storage across clusters. Use a similar deployment pattern for your apps with Satellite Config. Choose from templates for an OpenShift Data Foundation cluster using local disks on your worker nodes or an OpenShift Data Foundation cluster that uses dynamically provisioned volumes from your storage provider. Integrate with IBM Cloud Object Storage by providing credentials in the template. 1.2.1. Using OpenShift Data Foundation with the local storage present on your worker nodes in IBM Cloud Satellite For an OpenShift Data Foundation configuration that uses the local storage present on your worker nodes, you can use a Satellite template to create your OpenShift Data Foundation configuration. Your cluster must meet certain requirements, such as CPU and memory requirements and size requirements for the available raw, unformatted, unmounted disks. Choose a local OpenShift Data Foundation configuration when you want to use the local storage devices already present on your worker nodes, or statically provisioned raw volumes that you attach to your worker nodes. For more information, see the IBM Cloud Satellite local OpenShift Data Foundation storage documentation . 1.2.2. Using OpenShift Data Foundation with remote, dynamically provisioned storage volumes in IBM Cloud Satellite For an OpenShift Data Foundation configuration that uses remote, dynamically provisioned storage volumes from your preferred storage provider, you can use a Satellite storage template to create your storage configuration. In your OpenShift Data Foundation configuration, you specify the storage classes that you want to use and the volume sizes that you want to provision. Your cluster must meet certain requirements, such as CPU and memory requirements. Choose the OpenShift Data Foundation-remote storage template when you want to use dynamically provisioned remote volumes from your storage provider in your OpenShift Data Foundation configuration. For more information, see the IBM Cloud Satellite remote OpenShift Data Foundation storage documentation .
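For the managed add-on path described in Section 1.1, the add-on is usually enabled with the IBM Cloud CLI. The following is a minimal sketch only; the cluster name and add-on version are examples, the exact flags may differ between CLI releases, and the container-service plug-in is assumed to be installed:

# Enable the managed OpenShift Data Foundation add-on on an existing cluster (names and versions are examples)
ibmcloud oc cluster addon enable openshift-data-foundation --cluster my-roks-cluster --version 4.13.0
# List the add-ons to confirm the deployment status
ibmcloud oc cluster addon ls --cluster my-roks-cluster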
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_cloud/deploying_openshift_container_storage_using_ibm_cloud_rhodf
Chapter 43. Base64 DataFormat
Chapter 43. Base64 DataFormat Available as of Camel version 2.11 The Base64 data format is used for base64 encoding and decoding. 43.1. Options The Base64 dataformat supports 4 options, which are listed below. Name Default Java Type Description lineLength 76 Integer To specify a maximum line length for the encoded data. By default 76 is used. lineSeparator String The line separators to use. Uses new line characters (CRLF) by default. urlSafe false Boolean Instead of emitting '+' and '/' we emit '-' and '_' respectively. urlSafe is only applied to encode operations. Decoding seamlessly handles both modes. Is by default false. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 43.2. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.dataformat.base64.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.base64.enabled Enable base64 dataformat true Boolean camel.dataformat.base64.line-length To specify a maximum line length for the encoded data. By default 76 is used. 76 Integer camel.dataformat.base64.line-separator The line separators to use. Uses new line characters (CRLF) by default. String camel.dataformat.base64.url-safe Instead of emitting '+' and '/' we emit '-' and '_' respectively. urlSafe is only applied to encode operations. Decoding seamlessly handles both modes. Is by default false. false Boolean In Spring DSL, you configure the data format using this tag: <camelContext> <dataFormats> <!-- for a newline character (\n), use the HTML entity notation coupled with the ASCII code. --> <base64 lineSeparator="&#10;" id="base64withNewLine" /> <base64 lineLength="64" id="base64withLineLength64" /> </dataFormats> ... </camelContext> Then you can use it later by its reference: <route> <from uri="direct:startEncode" /> <marshal ref="base64withLineLength64" /> <to uri="mock:result" /> </route> Most of the time, you won't need to declare the data format if you use the default options. In that case, you can declare the data format inline as shown below. 43.3. Marshal In this example we marshal the file content to a base64 object. from("file://data.bin") .marshal().base64() .to("jms://myqueue"); In Spring DSL: <from uri="file://data.bin"> <marshal> <base64/> </marshal> <to uri="jms://myqueue"/> 43.4. Unmarshal In this example we unmarshal the payload from the JMS queue to a byte[] object, before it is processed by the newOrder processor. from("jms://queue/order") .unmarshal().base64() .process("newOrder"); In Spring DSL: <from uri="jms://queue/order"> <unmarshal> <base64/> </unmarshal> <to uri="bean:newOrder"/> 43.5. Dependencies To use Base64 in your Camel routes you need to add a dependency on camel-base64 which implements this data format. If you use Maven you can just add the following to your pom.xml: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-base64</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>
[ "<camelContext> <dataFormats> <!-- for a newline character (\\n), use the HTML entity notation coupled with the ASCII code. --> <base64 lineSeparator=\"&#10;\" id=\"base64withNewLine\" /> <base64 lineLength=\"64\" id=\"base64withLineLength64\" /> </dataFormats> </camelContext>", "<route> <from uri=\"direct:startEncode\" /> <marshal ref=\"base64withLineLength64\" /> <to uri=\"mock:result\" /> </route>", "from(\"file://data.bin\") .marshal().base64() .to(\"jms://myqueue\");", "<from uri=\"file://data.bin\"> <marshal> <base64/> </marshal> <to uri=\"jms://myqueue\"/>", "from(\"jms://queue/order\") .unmarshal().base64() .process(\"newOrder\");", "<from uri=\"jms://queue/order\"> <unmarshal> <base64/> </unmarshal> <to uri=\"bean:newOrder\"/>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-base64</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/base64-dataformat
4.15. Disabling ptrace()
4.15. Disabling ptrace() The ptrace() system call allows one process to observe and control the execution of another process and change its memory and registers. This call is used primarily by developers during debugging, for example when using the strace utility. When ptrace() is not needed, it can be disabled to improve system security. This can be done by enabling the deny_ptrace Boolean, which denies all processes, even those that are running in unconfined_t domains, from being able to use ptrace() on other processes. The deny_ptrace Boolean is disabled by default. To enable it, run the setsebool -P deny_ptrace on command as the root user: To verify if this Boolean is enabled, use the following command: To disable this Boolean, run the setsebool -P deny_ptrace off command as root: Note The setsebool -P command makes persistent changes. Do not use the -P option if you do not want changes to persist across reboots. This Boolean influences only packages that are part of Red Hat Enterprise Linux. Consequently, third-party packages could still use the ptrace() system call. To list all domains that are allowed to use ptrace() , enter the following command. Note that the setools-console package provides the sesearch utility and that the package is not installed by default.
[ "~]# setsebool -P deny_ptrace on", "~]USD getsebool deny_ptrace deny_ptrace --> on", "~]# setsebool -P deny_ptrace off", "~]# sesearch -A -p ptrace,sys_ptrace -C | grep -v deny_ptrace | cut -d ' ' -f 5" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-disable_ptrace
27.4. Using libStorageMgmt
27.4. Using libStorageMgmt To use libStorageMgmt interactively, use the lsmcli tool. The lsmcli tool requires two things to run: A Uniform Resource Identifier (URI) which is used to identify the plug-in to connect to the array and any configurable options the array requires. A valid user name and password for the array. URI has the following form: plugin+ optional-transport :// user-name @ host : port /? query-string-parameters Each plug-in has different requirements for what is needed. Example 27.1. Examples of Different Plug-in Requirements Simulator Plug-in That Requires No User Name or Password sim:// NetApp Plug-in over SSL with User Name root ontap+ssl://root@ filer . company .com/ SMI-S Plug-in over SSL for EMC Array smis+ssl://admin@ provider .com:5989/?namespace=root/emc There are three options to use the URI: Pass the URI as part of the command. Store the URI in an environment variable. Place the URI in the file ~/.lsmcli , which contains name-value pairs separated by "=". The only currently supported configuration is 'uri'. The URI to use is determined in that order. If all three are supplied, only the first one on the command line will be used. Provide the password by specifying the -P option on the command line or by placing it in the environment variable LSMCLI_PASSWORD . Example 27.2. Example of lsmcli An example of using the command line to create a new volume and make it visible to an initiator. List arrays that are serviced by this connection: List storage pools: Create a volume: Create an access group with an iSCSI initiator in it: Create the same access group using the access-group-create command syntax: Allow the access group visibility to the newly created volume: The design of the library provides for a process separation between the client and the plug-in by means of inter-process communication (IPC). This prevents bugs in the plug-in from crashing the client application. It also provides a means for plug-in writers to write plug-ins with a license of their own choosing. When a client opens the library passing a URI, the client library looks at the URI to determine which plug-in should be used. The plug-ins are technically standalone applications but they are designed to have a file descriptor passed to them on the command line. The client library then opens the appropriate Unix domain socket which causes the daemon to fork and execute the plug-in. This gives the client library a point-to-point communication channel with the plug-in. The daemon can be restarted without affecting existing clients. While the client has the library open for that plug-in, the plug-in process is running. After one or more commands are sent and the plug-in is closed, the plug-in process cleans up and then exits. The default behavior of lsmcli is to wait until the operation is complete. Depending on the requested operations, this could potentially take many hours. To allow a return to normal usage, it is possible to use the -b option on the command line. If the exit code is 0 the command is completed. If the exit code is 7 the command is in progress and a job identifier is written to standard output. The user or script can then take the job ID and query the status of the command as needed by using lsmcli --jobstatus JobID . If the job is now completed, the exit value will be 0 and the results printed to standard output. If the command is still in progress, the return value will be 7 and the percentage complete will be printed to the standard output. Example 27.3.
An Asynchronous Example Create a volume, passing the -b option so that the command returns immediately. Check the exit value: 7 indicates that the job is still in progress. Check if the job is completed: Check the exit value. 7 indicates that the job is still in progress, so the standard output shows the percentage done (33% in this example). Wait for some time and check the exit value again: 0 means success, and standard output displays the new volume. For scripting, pass the -t SeparatorCharacters option. This makes it easier to parse the output. Example 27.4. Scripting Examples It is recommended to use the Python library for non-trivial scripting. For more information on lsmcli , see the lsmcli man page or lsmcli --help .
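Building on the exit-code semantics described above (0 means complete, 7 means in progress), asynchronous jobs can be polled from a shell script. This is a sketch under the assumption that the URI is already set in LSMCLI_URI and that the pool POO1 from the earlier examples exists:

# Create a volume asynchronously and poll until the job completes
JOB=$(lsmcli volume-create --name async_created --size 20G --pool POO1 -b)
while true; do
    lsmcli job-status --job "$JOB"
    rc=$?
    [ "$rc" -eq 0 ] && break                                    # 0: job finished, results were printed
    [ "$rc" -ne 7 ] && { echo "job failed (rc=$rc)"; exit 1; }  # anything other than 7 is an error
    sleep 10                                                    # 7: still running, percentage was printed
done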
[ "lsmcli -u sim://", "export LSMCLI_URI=sim://", "lsmcli list --type SYSTEMS ID | Name | Status -------+-------------------------------+-------- sim-01 | LSM simulated storage plug-in | OK", "lsmcli list --type POOLS -H ID | Name | Total space | Free space | System ID -----+---------------+----------------------+----------------------+----------- POO2 | Pool 2 | 18446744073709551616 | 18446744073709551616 | sim-01 POO3 | Pool 3 | 18446744073709551616 | 18446744073709551616 | sim-01 POO1 | Pool 1 | 18446744073709551616 | 18446744073709551616 | sim-01 POO4 | lsm_test_aggr | 18446744073709551616 | 18446744073709551616 | sim-01", "lsmcli volume-create --name volume_name --size 20G --pool POO1 -H ID | Name | vpd83 | bs | #blocks | status | -----+-------------+----------------------------------+-----+----------+--------+---- Vol1 | volume_name | F7DDF7CA945C66238F593BC38137BD2F | 512 | 41943040 | OK |", "lsmcli --create-access-group example_ag --id iqn.1994-05.com.domain:01.89bd01 --type ISCSI --system sim-01 ID | Name | Initiator ID |SystemID ---------------------------------+------------+----------------------------------+-------- 782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 |sim-01", "lsmcli access-group-create --name example_ag --init iqn.1994-05.com.domain:01.89bd01 --init-type ISCSI --sys sim-01 ID | Name | Initiator IDs | System ID ---------------------------------+------------+----------------------------------+----------- 782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 | sim-01", "lsmcli access-group-grant --ag 782d00c8ac63819d6cca7069282e03a0 --vol Vol1 --access RW", "lsmcli volume-create --name async_created --size 20G --pool POO1 -b JOB_3", "echo USD? 7", "lsmcli job-status --job JOB_3 33", "echo USD? 7", "lsmcli job-status --job JOB_3 ID | Name | vpd83 | Block Size | -----+---------------+----------------------------------+-------------+----- Vol2 | async_created | 855C9BA51991B0CC122A3791996F6B15 | 512 |", "lsmcli list --type volumes -t# Vol1#volume_name#049167B5D09EC0A173E92A63F6C3EA2A#512#41943040#21474836480#OK#sim-01#POO1 Vol2#async_created#3E771A2E807F68A32FA5E15C235B60CC#512#41943040#21474836480#OK#sim-01#POO1", "lsmcli list --type volumes -t \" | \" Vol1 | volume_name | 049167B5D09EC0A173E92A63F6C3EA2A | 512 | 41943040 | 21474836480 | OK | 21474836480 | sim-01 | POO1 Vol2 | async_created | 3E771A2E807F68A32FA5E15C235B60CC | 512 | 41943040 | 21474836480 | OK | sim-01 | POO1", "lsmcli list --type volumes -s --------------------------------------------- ID | Vol1 Name | volume_name VPD83 | 049167B5D09EC0A173E92A63F6C3EA2A Block Size | 512 #blocks | 41943040 Size | 21474836480 Status | OK System ID | sim-01 Pool ID | POO1 --------------------------------------------- ID | Vol2 Name | async_created VPD83 | 3E771A2E807F68A32FA5E15C235B60CC Block Size | 512 #blocks | 41943040 Size | 21474836480 Status | OK System ID | sim-01 Pool ID | POO1 ---------------------------------------------" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-libstoragemgmt-use
Chapter 4. Deploy AWS Aurora in multiple availability zones
Chapter 4. Deploy AWS Aurora in multiple availability zones This topic describes how to deploy an Aurora regional deployment of a PostgreSQL instance across multiple availability zones to tolerate one or more availability zone failures in a given AWS region. This deployment is intended to be used with the setup described in the Concepts for active-passive deployments chapter. Use this deployment with the other building blocks outlined in the Building blocks active-passive deployments chapter. Note We provide these blueprints to show a minimal functionally complete example with a good baseline performance for regular installations. You would still need to adapt it to your environment and your organization's standards and security best practices. 4.1. Architecture Aurora database clusters consist of multiple Aurora database instances, with one instance designated as the primary writer and all others as backup readers. To ensure high availability in the event of availability zone failures, Aurora allows database instances to be deployed across multiple zones in a single AWS region. In the event of a failure on the availability zone that is hosting the Primary database instance, Aurora automatically heals itself and promotes a reader instance from a non-failed availability zone to be the new writer instance. Figure 4.1. Aurora Multiple Availability Zone Deployment See the AWS Aurora documentation for more details on the semantics provided by Aurora databases. This documentation follows AWS best practices and creates a private Aurora database that is not exposed to the Internet. To access the database from a ROSA cluster, establish a peering connection between the database and the ROSA cluster . 4.2. Procedure The following procedure contains two sections: Creation of an Aurora Multi-AZ database cluster with the name "keycloak-aurora" in eu-west-1. Creation of a peering connection between the ROSA cluster(s) and the Aurora VPC to allow applications deployed on the ROSA clusters to establish connections with the database. 4.2.1. Create Aurora database Cluster Create a VPC for the Aurora cluster Command: aws ec2 create-vpc \ --cidr-block 192.168.0.0/16 \ --tag-specifications "ResourceType=vpc, Tags=[{Key=AuroraCluster,Value=keycloak-aurora}]" \ 1 --region eu-west-1 1 We add an optional tag with the name of the Aurora cluster so that we can easily retrieve the VPC. Output: { "Vpc": { "CidrBlock": "192.168.0.0/16", "DhcpOptionsId": "dopt-0bae7798158bc344f", "State": "pending", "VpcId": "vpc-0b40bd7c59dbe4277", "OwnerId": "606671647913", "InstanceTenancy": "default", "Ipv6CidrBlockAssociationSet": [], "CidrBlockAssociationSet": [ { "AssociationId": "vpc-cidr-assoc-09a02a83059ba5ab6", "CidrBlock": "192.168.0.0/16", "CidrBlockState": { "State": "associated" } } ], "IsDefault": false } } Create a subnet for each availability zone that Aurora will be deployed to, using the VpcId of the newly created VPC. Note The cidr-block range specified for each of the availability zones must not overlap. 
Zone A Command: aws ec2 create-subnet \ --availability-zone "eu-west-1a" \ --vpc-id vpc-0b40bd7c59dbe4277 \ --cidr-block 192.168.0.0/19 \ --region eu-west-1 Output: { "Subnet": { "AvailabilityZone": "eu-west-1a", "AvailabilityZoneId": "euw1-az3", "AvailableIpAddressCount": 8187, "CidrBlock": "192.168.0.0/19", "DefaultForAz": false, "MapPublicIpOnLaunch": false, "State": "available", "SubnetId": "subnet-0d491a1a798aa878d", "VpcId": "vpc-0b40bd7c59dbe4277", "OwnerId": "606671647913", "AssignIpv6AddressOnCreation": false, "Ipv6CidrBlockAssociationSet": [], "SubnetArn": "arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-0d491a1a798aa878d", "EnableDns64": false, "Ipv6Native": false, "PrivateDnsNameOptionsOnLaunch": { "HostnameType": "ip-name", "EnableResourceNameDnsARecord": false, "EnableResourceNameDnsAAAARecord": false } } } Zone B Command: aws ec2 create-subnet \ --availability-zone "eu-west-1b" \ --vpc-id vpc-0b40bd7c59dbe4277 \ --cidr-block 192.168.32.0/19 \ --region eu-west-1 Output: { "Subnet": { "AvailabilityZone": "eu-west-1b", "AvailabilityZoneId": "euw1-az1", "AvailableIpAddressCount": 8187, "CidrBlock": "192.168.32.0/19", "DefaultForAz": false, "MapPublicIpOnLaunch": false, "State": "available", "SubnetId": "subnet-057181b1e3728530e", "VpcId": "vpc-0b40bd7c59dbe4277", "OwnerId": "606671647913", "AssignIpv6AddressOnCreation": false, "Ipv6CidrBlockAssociationSet": [], "SubnetArn": "arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-057181b1e3728530e", "EnableDns64": false, "Ipv6Native": false, "PrivateDnsNameOptionsOnLaunch": { "HostnameType": "ip-name", "EnableResourceNameDnsARecord": false, "EnableResourceNameDnsAAAARecord": false } } } Obtain the ID of the Aurora VPC route-table Command: aws ec2 describe-route-tables \ --filters Name=vpc-id,Values=vpc-0b40bd7c59dbe4277 \ --region eu-west-1 Output: { "RouteTables": [ { "Associations": [ { "Main": true, "RouteTableAssociationId": "rtbassoc-02dfa06f4c7b4f99a", "RouteTableId": "rtb-04a644ad3cd7de351", "AssociationState": { "State": "associated" } } ], "PropagatingVgws": [], "RouteTableId": "rtb-04a644ad3cd7de351", "Routes": [ { "DestinationCidrBlock": "192.168.0.0/16", "GatewayId": "local", "Origin": "CreateRouteTable", "State": "active" } ], "Tags": [], "VpcId": "vpc-0b40bd7c59dbe4277", "OwnerId": "606671647913" } ] } Associate the Aurora VPC route-table each availability zone's subnet Zone A Command: aws ec2 associate-route-table \ --route-table-id rtb-04a644ad3cd7de351 \ --subnet-id subnet-0d491a1a798aa878d \ --region eu-west-1 Zone B Command: aws ec2 associate-route-table \ --route-table-id rtb-04a644ad3cd7de351 \ --subnet-id subnet-057181b1e3728530e \ --region eu-west-1 Create Aurora Subnet Group Command: aws rds create-db-subnet-group \ --db-subnet-group-name keycloak-aurora-subnet-group \ --db-subnet-group-description "Aurora DB Subnet Group" \ --subnet-ids subnet-0d491a1a798aa878d subnet-057181b1e3728530e \ --region eu-west-1 Create Aurora Security Group Command: aws ec2 create-security-group \ --group-name keycloak-aurora-security-group \ --description "Aurora DB Security Group" \ --vpc-id vpc-0b40bd7c59dbe4277 \ --region eu-west-1 Output: { "GroupId": "sg-0d746cc8ad8d2e63b" } Create the Aurora DB Cluster Command: aws rds create-db-cluster \ --db-cluster-identifier keycloak-aurora \ --database-name keycloak \ --engine aurora-postgresql \ --engine-version USD{properties["aurora-postgresql.version"]} \ --master-username keycloak \ --master-user-password secret99 \ --vpc-security-group-ids sg-0d746cc8ad8d2e63b \ 
--db-subnet-group-name keycloak-aurora-subnet-group \ --region eu-west-1 Note You should replace the --master-username and --master-user-password values. The values specified here must be used when configuring the Red Hat build of Keycloak database credentials. Output: { "DBCluster": { "AllocatedStorage": 1, "AvailabilityZones": [ "eu-west-1b", "eu-west-1c", "eu-west-1a" ], "BackupRetentionPeriod": 1, "DatabaseName": "keycloak", "DBClusterIdentifier": "keycloak-aurora", "DBClusterParameterGroup": "default.aurora-postgresql15", "DBSubnetGroup": "keycloak-aurora-subnet-group", "Status": "creating", "Endpoint": "keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com", "ReaderEndpoint": "keycloak-aurora.cluster-ro-clhthfqe0h8p.eu-west-1.rds.amazonaws.com", "MultiAZ": false, "Engine": "aurora-postgresql", "EngineVersion": "15.3", "Port": 5432, "MasterUsername": "keycloak", "PreferredBackupWindow": "02:21-02:51", "PreferredMaintenanceWindow": "fri:03:34-fri:04:04", "ReadReplicaIdentifiers": [], "DBClusterMembers": [], "VpcSecurityGroups": [ { "VpcSecurityGroupId": "sg-0d746cc8ad8d2e63b", "Status": "active" } ], "HostedZoneId": "Z29XKXDKYMONMX", "StorageEncrypted": false, "DbClusterResourceId": "cluster-IBWXUWQYM3MS5BH557ZJ6ZQU4I", "DBClusterArn": "arn:aws:rds:eu-west-1:606671647913:cluster:keycloak-aurora", "AssociatedRoles": [], "IAMDatabaseAuthenticationEnabled": false, "ClusterCreateTime": "2023-11-01T10:40:45.964000+00:00", "EngineMode": "provisioned", "DeletionProtection": false, "HttpEndpointEnabled": false, "CopyTagsToSnapshot": false, "CrossAccountClone": false, "DomainMemberships": [], "TagList": [], "AutoMinorVersionUpgrade": true, "NetworkType": "IPV4" } } Create Aurora DB instances Create Zone A Writer instance Command: aws rds create-db-instance \ --db-cluster-identifier keycloak-aurora \ --db-instance-identifier "keycloak-aurora-instance-1" \ --db-instance-class db.t4g.large \ --engine aurora-postgresql \ --region eu-west-1 Create Zone B Reader instance Command: aws rds create-db-instance \ --db-cluster-identifier keycloak-aurora \ --db-instance-identifier "keycloak-aurora-instance-2" \ --db-instance-class db.t4g.large \ --engine aurora-postgresql \ --region eu-west-1 Wait for all Writer and Reader instances to be ready Command: aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-1 --region eu-west-1 aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-2 --region eu-west-1 Obtain the Writer endpoint URL for use by Keycloak Command: aws rds describe-db-clusters \ --db-cluster-identifier keycloak-aurora \ --query 'DBClusters[*].Endpoint' \ --region eu-west-1 \ --output text Output: [ "keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com" ] 4.2.2. Establish Peering Connections with ROSA clusters Perform these steps once for each ROSA cluster that contains a Red Hat build of Keycloak deployment. 
Retrieve the Aurora VPC Command: aws ec2 describe-vpcs \ --filters "Name=tag:AuroraCluster,Values=keycloak-aurora" \ --query 'Vpcs[*].VpcId' \ --region eu-west-1 \ --output text Output: Retrieve the ROSA cluster VPC Login to the ROSA cluster using oc Retrieve the ROSA VPC Command: NODE=USD(oc get nodes --selector=node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') aws ec2 describe-instances \ --filters "Name=private-dns-name,Values=USD{NODE}" \ --query 'Reservations[0].Instances[0].VpcId' \ --region eu-west-1 \ --output text Output: Create Peering Connection Command: aws ec2 create-vpc-peering-connection \ --vpc-id vpc-0b721449398429559 \ 1 --peer-vpc-id vpc-0b40bd7c59dbe4277 \ 2 --peer-region eu-west-1 \ --region eu-west-1 1 ROSA cluster VPC 2 Aurora VPC Output: { "VpcPeeringConnection": { "AccepterVpcInfo": { "OwnerId": "606671647913", "VpcId": "vpc-0b40bd7c59dbe4277", "Region": "eu-west-1" }, "ExpirationTime": "2023-11-08T13:26:30+00:00", "RequesterVpcInfo": { "CidrBlock": "10.0.17.0/24", "CidrBlockSet": [ { "CidrBlock": "10.0.17.0/24" } ], "OwnerId": "606671647913", "PeeringOptions": { "AllowDnsResolutionFromRemoteVpc": false, "AllowEgressFromLocalClassicLinkToRemoteVpc": false, "AllowEgressFromLocalVpcToRemoteClassicLink": false }, "VpcId": "vpc-0b721449398429559", "Region": "eu-west-1" }, "Status": { "Code": "initiating-request", "Message": "Initiating Request to 606671647913" }, "Tags": [], "VpcPeeringConnectionId": "pcx-0cb23d66dea3dca9f" } } Wait for Peering connection to exist Command: aws ec2 wait vpc-peering-connection-exists --vpc-peering-connection-ids pcx-0cb23d66dea3dca9f Accept the peering connection Command: aws ec2 accept-vpc-peering-connection \ --vpc-peering-connection-id pcx-0cb23d66dea3dca9f \ --region eu-west-1 Output: { "VpcPeeringConnection": { "AccepterVpcInfo": { "CidrBlock": "192.168.0.0/16", "CidrBlockSet": [ { "CidrBlock": "192.168.0.0/16" } ], "OwnerId": "606671647913", "PeeringOptions": { "AllowDnsResolutionFromRemoteVpc": false, "AllowEgressFromLocalClassicLinkToRemoteVpc": false, "AllowEgressFromLocalVpcToRemoteClassicLink": false }, "VpcId": "vpc-0b40bd7c59dbe4277", "Region": "eu-west-1" }, "RequesterVpcInfo": { "CidrBlock": "10.0.17.0/24", "CidrBlockSet": [ { "CidrBlock": "10.0.17.0/24" } ], "OwnerId": "606671647913", "PeeringOptions": { "AllowDnsResolutionFromRemoteVpc": false, "AllowEgressFromLocalClassicLinkToRemoteVpc": false, "AllowEgressFromLocalVpcToRemoteClassicLink": false }, "VpcId": "vpc-0b721449398429559", "Region": "eu-west-1" }, "Status": { "Code": "provisioning", "Message": "Provisioning" }, "Tags": [], "VpcPeeringConnectionId": "pcx-0cb23d66dea3dca9f" } } Update ROSA cluster VPC route-table Command: ROSA_PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 describe-route-tables \ --filters "Name=vpc-id,Values=vpc-0b721449398429559" "Name=association.main,Values=true" \ 1 --query "RouteTables[*].RouteTableId" \ --output text \ --region eu-west-1 ) aws ec2 create-route \ --route-table-id USD{ROSA_PUBLIC_ROUTE_TABLE_ID} \ --destination-cidr-block 192.168.0.0/16 \ 2 --vpc-peering-connection-id pcx-0cb23d66dea3dca9f \ --region eu-west-1 1 ROSA cluster VPC 2 This must be the same as the cidr-block used when creating the Aurora VPC Update the Aurora Security Group Command: AURORA_SECURITY_GROUP_ID=USD(aws ec2 describe-security-groups \ --filters "Name=group-name,Values=keycloak-aurora-security-group" \ --query "SecurityGroups[*].GroupId" \ --region eu-west-1 \ --output text ) aws ec2 authorize-security-group-ingress \ --group-id 
USD{AURORA_SECURITY_GROUP_ID} \ --protocol tcp \ --port 5432 \ --cidr 10.0.17.0/24 \ 1 --region eu-west-1 1 The "machine_cidr" of the ROSA cluster Output: { "Return": true, "SecurityGroupRules": [ { "SecurityGroupRuleId": "sgr-0785d2f04b9cec3f5", "GroupId": "sg-0d746cc8ad8d2e63b", "GroupOwnerId": "606671647913", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432, "CidrIpv4": "10.0.17.0/24" } ] } 4.3. Verifying the connection The simplest way to verify that a connection is possible between a ROSA cluster and an Aurora DB cluster is to deploy psql on the Openshift cluster and attempt to connect to the writer endpoint. The following command creates a pod in the default namespace and establishes a psql connection with the Aurora cluster if possible. Upon exiting the pod shell, the pod is deleted. USER=keycloak 1 PASSWORD=secret99 2 DATABASE=keycloak 3 HOST=USD(aws rds describe-db-clusters \ --db-cluster-identifier keycloak-aurora \ 4 --query 'DBClusters[*].Endpoint' \ --region eu-west-1 \ --output text ) oc run -i --tty --rm debug --image=postgres:15 --restart=Never -- psql postgresql://USD{USER}:USD{PASSWORD}@USD{HOST}/USD{DATABASE} 1 Aurora DB user, this can be the same as --master-username used when creating the DB. 2 Aurora DB user-password, this can be the same as --master- user-password used when creating the DB. 3 The name of the Aurora DB, such as --database-name . 4 The name of your Aurora DB cluster. 4.4. Deploying Red Hat build of Keycloak Now that an Aurora database has been established and linked with all of your ROSA clusters, the step is to deploy Red Hat build of Keycloak as described in the Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator chapter with the JDBC url configured to use the Aurora database writer endpoint. To do this, create a Keycloak CR with the following adjustments: Update spec.db.url to be jdbc:aws-wrapper:postgresql://USDHOST:5432/keycloak where USDHOST is the Aurora writer endpoint URL . Ensure that the Secrets referenced by spec.db.usernameSecret and spec.db.passwordSecret contain usernames and passwords defined when creating Aurora.
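To avoid copying the writer endpoint by hand when following Section 4.4, the JDBC URL for spec.db.url can be assembled from the same describe-db-clusters query used earlier in the procedure; a minimal sketch:

# Build the JDBC URL expected by spec.db.url in the Keycloak CR
HOST=$(aws rds describe-db-clusters \
  --db-cluster-identifier keycloak-aurora \
  --query 'DBClusters[*].Endpoint' \
  --region eu-west-1 \
  --output text)
echo "jdbc:aws-wrapper:postgresql://${HOST}:5432/keycloak"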
[ "aws ec2 create-vpc --cidr-block 192.168.0.0/16 --tag-specifications \"ResourceType=vpc, Tags=[{Key=AuroraCluster,Value=keycloak-aurora}]\" \\ 1 --region eu-west-1", "{ \"Vpc\": { \"CidrBlock\": \"192.168.0.0/16\", \"DhcpOptionsId\": \"dopt-0bae7798158bc344f\", \"State\": \"pending\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"InstanceTenancy\": \"default\", \"Ipv6CidrBlockAssociationSet\": [], \"CidrBlockAssociationSet\": [ { \"AssociationId\": \"vpc-cidr-assoc-09a02a83059ba5ab6\", \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockState\": { \"State\": \"associated\" } } ], \"IsDefault\": false } }", "aws ec2 create-subnet --availability-zone \"eu-west-1a\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.0.0/19 --region eu-west-1", "{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1a\", \"AvailabilityZoneId\": \"euw1-az3\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.0.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-0d491a1a798aa878d\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-0d491a1a798aa878d\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }", "aws ec2 create-subnet --availability-zone \"eu-west-1b\" --vpc-id vpc-0b40bd7c59dbe4277 --cidr-block 192.168.32.0/19 --region eu-west-1", "{ \"Subnet\": { \"AvailabilityZone\": \"eu-west-1b\", \"AvailabilityZoneId\": \"euw1-az1\", \"AvailableIpAddressCount\": 8187, \"CidrBlock\": \"192.168.32.0/19\", \"DefaultForAz\": false, \"MapPublicIpOnLaunch\": false, \"State\": \"available\", \"SubnetId\": \"subnet-057181b1e3728530e\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\", \"AssignIpv6AddressOnCreation\": false, \"Ipv6CidrBlockAssociationSet\": [], \"SubnetArn\": \"arn:aws:ec2:eu-west-1:606671647913:subnet/subnet-057181b1e3728530e\", \"EnableDns64\": false, \"Ipv6Native\": false, \"PrivateDnsNameOptionsOnLaunch\": { \"HostnameType\": \"ip-name\", \"EnableResourceNameDnsARecord\": false, \"EnableResourceNameDnsAAAARecord\": false } } }", "aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0b40bd7c59dbe4277 --region eu-west-1", "{ \"RouteTables\": [ { \"Associations\": [ { \"Main\": true, \"RouteTableAssociationId\": \"rtbassoc-02dfa06f4c7b4f99a\", \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"AssociationState\": { \"State\": \"associated\" } } ], \"PropagatingVgws\": [], \"RouteTableId\": \"rtb-04a644ad3cd7de351\", \"Routes\": [ { \"DestinationCidrBlock\": \"192.168.0.0/16\", \"GatewayId\": \"local\", \"Origin\": \"CreateRouteTable\", \"State\": \"active\" } ], \"Tags\": [], \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"OwnerId\": \"606671647913\" } ] }", "aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-0d491a1a798aa878d --region eu-west-1", "aws ec2 associate-route-table --route-table-id rtb-04a644ad3cd7de351 --subnet-id subnet-057181b1e3728530e --region eu-west-1", "aws rds create-db-subnet-group --db-subnet-group-name keycloak-aurora-subnet-group --db-subnet-group-description \"Aurora DB Subnet Group\" --subnet-ids subnet-0d491a1a798aa878d subnet-057181b1e3728530e --region eu-west-1", "aws ec2 create-security-group --group-name 
keycloak-aurora-security-group --description \"Aurora DB Security Group\" --vpc-id vpc-0b40bd7c59dbe4277 --region eu-west-1", "{ \"GroupId\": \"sg-0d746cc8ad8d2e63b\" }", "aws rds create-db-cluster --db-cluster-identifier keycloak-aurora --database-name keycloak --engine aurora-postgresql --engine-version USD{properties[\"aurora-postgresql.version\"]} --master-username keycloak --master-user-password secret99 --vpc-security-group-ids sg-0d746cc8ad8d2e63b --db-subnet-group-name keycloak-aurora-subnet-group --region eu-west-1", "{ \"DBCluster\": { \"AllocatedStorage\": 1, \"AvailabilityZones\": [ \"eu-west-1b\", \"eu-west-1c\", \"eu-west-1a\" ], \"BackupRetentionPeriod\": 1, \"DatabaseName\": \"keycloak\", \"DBClusterIdentifier\": \"keycloak-aurora\", \"DBClusterParameterGroup\": \"default.aurora-postgresql15\", \"DBSubnetGroup\": \"keycloak-aurora-subnet-group\", \"Status\": \"creating\", \"Endpoint\": \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"ReaderEndpoint\": \"keycloak-aurora.cluster-ro-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\", \"MultiAZ\": false, \"Engine\": \"aurora-postgresql\", \"EngineVersion\": \"15.3\", \"Port\": 5432, \"MasterUsername\": \"keycloak\", \"PreferredBackupWindow\": \"02:21-02:51\", \"PreferredMaintenanceWindow\": \"fri:03:34-fri:04:04\", \"ReadReplicaIdentifiers\": [], \"DBClusterMembers\": [], \"VpcSecurityGroups\": [ { \"VpcSecurityGroupId\": \"sg-0d746cc8ad8d2e63b\", \"Status\": \"active\" } ], \"HostedZoneId\": \"Z29XKXDKYMONMX\", \"StorageEncrypted\": false, \"DbClusterResourceId\": \"cluster-IBWXUWQYM3MS5BH557ZJ6ZQU4I\", \"DBClusterArn\": \"arn:aws:rds:eu-west-1:606671647913:cluster:keycloak-aurora\", \"AssociatedRoles\": [], \"IAMDatabaseAuthenticationEnabled\": false, \"ClusterCreateTime\": \"2023-11-01T10:40:45.964000+00:00\", \"EngineMode\": \"provisioned\", \"DeletionProtection\": false, \"HttpEndpointEnabled\": false, \"CopyTagsToSnapshot\": false, \"CrossAccountClone\": false, \"DomainMemberships\": [], \"TagList\": [], \"AutoMinorVersionUpgrade\": true, \"NetworkType\": \"IPV4\" } }", "aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-1\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1", "aws rds create-db-instance --db-cluster-identifier keycloak-aurora --db-instance-identifier \"keycloak-aurora-instance-2\" --db-instance-class db.t4g.large --engine aurora-postgresql --region eu-west-1", "aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-1 --region eu-west-1 aws rds wait db-instance-available --db-instance-identifier keycloak-aurora-instance-2 --region eu-west-1", "aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text", "[ \"keycloak-aurora.cluster-clhthfqe0h8p.eu-west-1.rds.amazonaws.com\" ]", "aws ec2 describe-vpcs --filters \"Name=tag:AuroraCluster,Values=keycloak-aurora\" --query 'Vpcs[*].VpcId' --region eu-west-1 --output text", "vpc-0b40bd7c59dbe4277", "NODE=USD(oc get nodes --selector=node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}') aws ec2 describe-instances --filters \"Name=private-dns-name,Values=USD{NODE}\" --query 'Reservations[0].Instances[0].VpcId' --region eu-west-1 --output text", "vpc-0b721449398429559", "aws ec2 create-vpc-peering-connection --vpc-id vpc-0b721449398429559 \\ 1 --peer-vpc-id vpc-0b40bd7c59dbe4277 \\ 2 --peer-region eu-west-1 --region eu-west-1", "{ 
\"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"OwnerId\": \"606671647913\", \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"ExpirationTime\": \"2023-11-08T13:26:30+00:00\", \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"initiating-request\", \"Message\": \"Initiating Request to 606671647913\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }", "aws ec2 wait vpc-peering-connection-exists --vpc-peering-connection-ids pcx-0cb23d66dea3dca9f", "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1", "{ \"VpcPeeringConnection\": { \"AccepterVpcInfo\": { \"CidrBlock\": \"192.168.0.0/16\", \"CidrBlockSet\": [ { \"CidrBlock\": \"192.168.0.0/16\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b40bd7c59dbe4277\", \"Region\": \"eu-west-1\" }, \"RequesterVpcInfo\": { \"CidrBlock\": \"10.0.17.0/24\", \"CidrBlockSet\": [ { \"CidrBlock\": \"10.0.17.0/24\" } ], \"OwnerId\": \"606671647913\", \"PeeringOptions\": { \"AllowDnsResolutionFromRemoteVpc\": false, \"AllowEgressFromLocalClassicLinkToRemoteVpc\": false, \"AllowEgressFromLocalVpcToRemoteClassicLink\": false }, \"VpcId\": \"vpc-0b721449398429559\", \"Region\": \"eu-west-1\" }, \"Status\": { \"Code\": \"provisioning\", \"Message\": \"Provisioning\" }, \"Tags\": [], \"VpcPeeringConnectionId\": \"pcx-0cb23d66dea3dca9f\" } }", "ROSA_PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 describe-route-tables --filters \"Name=vpc-id,Values=vpc-0b721449398429559\" \"Name=association.main,Values=true\" \\ 1 --query \"RouteTables[*].RouteTableId\" --output text --region eu-west-1 ) aws ec2 create-route --route-table-id USD{ROSA_PUBLIC_ROUTE_TABLE_ID} --destination-cidr-block 192.168.0.0/16 \\ 2 --vpc-peering-connection-id pcx-0cb23d66dea3dca9f --region eu-west-1", "AURORA_SECURITY_GROUP_ID=USD(aws ec2 describe-security-groups --filters \"Name=group-name,Values=keycloak-aurora-security-group\" --query \"SecurityGroups[*].GroupId\" --region eu-west-1 --output text ) aws ec2 authorize-security-group-ingress --group-id USD{AURORA_SECURITY_GROUP_ID} --protocol tcp --port 5432 --cidr 10.0.17.0/24 \\ 1 --region eu-west-1", "{ \"Return\": true, \"SecurityGroupRules\": [ { \"SecurityGroupRuleId\": \"sgr-0785d2f04b9cec3f5\", \"GroupId\": \"sg-0d746cc8ad8d2e63b\", \"GroupOwnerId\": \"606671647913\", \"IsEgress\": false, \"IpProtocol\": \"tcp\", \"FromPort\": 5432, \"ToPort\": 5432, \"CidrIpv4\": \"10.0.17.0/24\" } ] }", "USER=keycloak 1 PASSWORD=secret99 2 DATABASE=keycloak 3 HOST=USD(aws rds describe-db-clusters --db-cluster-identifier keycloak-aurora \\ 4 --query 'DBClusters[*].Endpoint' --region eu-west-1 --output text ) run -i --tty --rm debug --image=postgres:15 --restart=Never -- psql postgresql://USD{USER}:USD{PASSWORD}@USD{HOST}/USD{DATABASE}" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/deploy-aurora-multi-az-
Chapter 3. Features
Chapter 3. Features AMQ Streams 2.5 introduces the features described in this section. AMQ Streams 2.5 on OpenShift is based on Apache Kafka 3.5.0 and Strimzi 0.36.x. Note To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project . 3.1. AMQ Streams 2.5.x (Long Term Support) AMQ Streams 2.5.x is the Long Term Support (LTS) offering for AMQ Streams. The latest patch release is AMQ Streams 2.5.2. The AMQ Streams product images have changed to version 2.5.2. Although the supported Kafka version is listed as 3.5.0, it incorporates updates and improvements from Kafka 3.5.2. For information on the LTS terms and dates, see the AMQ Streams LTS Support Policy . 3.2. OpenShift Container Platform support AMQ Streams 2.5 is supported on OpenShift Container Platform 4.12 and later. For more information, see Chapter 11, Supported Configurations . 3.3. Kafka 3.5.x support AMQ Streams supports and uses Apache Kafka version 3.5.0. Updates for Kafka 3.5.2 are incorporated with the 2.5.2 patch release. Only Kafka distributions built by Red Hat are supported. You must upgrade the Cluster Operator to AMQ Streams version 2.5 before you can upgrade brokers and client applications to Kafka 3.5.0. For upgrade instructions, see Upgrading AMQ Streams . Refer to the Kafka 3.5.0 , Kafka 3.5.1 , and Kafka 3.5.2 Release Notes for additional information. Kafka 3.4.x is supported only for the purpose of upgrading to AMQ Streams 2.5. Note Kafka 3.5.x provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. KRaft mode is available as a Developer Preview . 3.4. Supporting the v1beta2 API version The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser . Upgrade of the custom resources to v1beta2 prepares AMQ Streams for a move to Kubernetes CRD v1 , which is required for Kubernetes 1.22. If you are upgrading from an AMQ Streams version prior to version 1.7: Upgrade to AMQ Streams 1.7 Convert the custom resources to v1beta2 Upgrade to AMQ Streams 1.8 Important You must upgrade your custom resources to use API version v1beta2 before upgrading to AMQ Streams version 2.5. 3.4.1. Upgrading custom resources to v1beta2 To support the upgrade of custom resources to v1beta2 , AMQ Streams provides an API conversion tool , which you can download from the AMQ Streams 1.8 software downloads page . You perform the custom resources upgrades in two steps. Step one: Convert the format of custom resources Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways: Converting the YAML files that describe the configuration for AMQ Streams custom resources Converting AMQ Streams custom resources directly in the cluster Alternatively, you can manually convert each custom resource into a format applicable to v1beta2 . Instructions for manually converting custom resources are included in the documentation. Step two: Upgrade CRDs to v1beta2 , using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually. For more information, see Upgrading from an AMQ Streams version earlier than 1.7 . 3.5. 
(Preview) Node pools for managing nodes in a Kafka cluster This release introduces the KafkaNodePools feature gate and a new KafkaNodePool custom resource that enables the configuration of different pools of Apache Kafka nodes. This feature gate is at an alpha level of maturity, which means that it is disabled by default, and should be treated as a developer preview . A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. The KafkaNodePool custom resource represents the configuration for nodes only in the node pool. Each pool has its own unique configuration, which includes mandatory settings such as the number of replicas, storage configuration, and a list of assigned roles. As you can assign roles to the nodes in a node pool, you can try the feature with a Kafka cluster that uses ZooKeeper for cluster management or KRaft mode. To enable the KafkaNodePools feature gate, specify +KafkaNodePools in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration. Enabling the KafkaNodePools feature gate env: - name: STRIMZI_FEATURE_GATES value: +KafkaNodePools Note Drain Cleaner is not supported for the node pools preview. See Configuring node pools . 3.6. (Preview) Unidirectional topic management using the Topic Operator This release also incorporates the UnidirectionalTopicOperator feature gate, introducing a unidirectional topic management mode. With unidirectional mode, you create Kafka topics using the KafkaTopic resource, which are then managed by the Topic Operator. This feature gate is at an alpha level of maturity, and should be treated as a developer preview . To enable the UnidirectionalTopicOperator feature gate, specify +UnidirectionalTopicOperator in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration. Enabling the UnidirectionalTopicOperator feature gate env: - name: STRIMZI_FEATURE_GATES value: +UnidirectionalTopicOperator Up to this release, the only way to use the Topic Operator to manage topics was in bidirectional mode, which is compatible with using ZooKeeper for cluster management. Unidirectional mode does not require ZooKeeper for cluster management, which is an important development as Kafka moves to using KRaft mode for managing clusters. See Using the Topic Operator . 3.7. Reporting tool for retrieving diagnostic and troubleshooting data The report.sh diagnostics tool is a script provided by Red Hat to gather essential data for troubleshooting AMQ Streams deployments on OpenShift. It collects relevant logs, configuration files, and other diagnostic data to assist in identifying and resolving issues. When you run the script, you can use additional parameters to retrieve specific data. The tool requires the OpenShift oc command-line tool to establish a connection to the running cluster. After which you can open a terminal and run the tool to retrieve data on components. From the following request, data is collected on a Kafka cluster, a Kafka Bridge cluster, and on secret keys and data values: Example request with data collection options ./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports The data is output to a specified directory. See Retrieving diagnostic and troubleshooting data . 3.8. OpenTelemetry for distributed tracing OpenTelemetry for distributed tracing has moved to GA. You can use OpenTelemetry with a specified tracing system. OpenTelemetry has replaced OpenTracing for distributed tracing. 
Support for OpenTracing is deprecated. By default, OpenTelemetry uses the OTLP (OpenTelemetry Protocol) exporter for tracing. AMQ Streams with OpenTelemetry is distributed for use with the Jaeger exporter, but you can specify other tracing systems supported by OpenTelemetry. AMQ Streams plans to migrate to using OpenTelemetry with the OTLP exporter by default, and is phasing out support for the Jaeger exporter. See Introducing distributed tracing.
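The following fragment is a minimal sketch of enabling OpenTelemetry tracing on a supported component such as Kafka Connect; the tracing type value reflects the product's configuration schema, while the service name and collector endpoint shown are illustrative assumptions rather than values from this release note.

spec:
  tracing:
    type: opentelemetry
  template:
    connectContainer:
      env:
        # Standard OpenTelemetry environment variables; the values below are placeholders.
        - name: OTEL_SERVICE_NAME
          value: my-kafka-connect
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otlp-collector:4317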
[ "env: - name: STRIMZI_FEATURE_GATES value: +KafkaNodePools", "env: - name: STRIMZI_FEATURE_GATES value: +UnidirectionalTopicOperator", "./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_openshift/features-str
4.5. Testing Your Models
4.5. Testing Your Models Designing and working with data is often much easier when you can see the information you are working with. The Teiid Designer Preview Data feature makes this possible and allows you to instantly preview the information described by any object, whether it is a physical table or a virtual view. In other words, you can test the views with actual data simply by selecting the table, view, procedure, or XML document. The preview functionality ensures that data access behavior in Teiid Designer reliably matches the behavior you see when the VDB is deployed to the server. Previewing information is a fast and easy way to sample the data. To run more complicated queries, like those your application is likely to use, execute the VDB in Teiid Designer and type in any query or SQL statement. After creating your models, you can test them by using the Preview Data action. Select a table object and execute the action, and the results of a simple query are displayed in the Data Tools SQL Results view. This action is accessible throughout Teiid Designer in various view toolbars and context menus. Previewable objects include: Relational table or view, including tables involving access patterns Relational procedure Web Service operation XML Document staging table Note If you attempt to preview a relational access pattern, a web service operation, or a relational procedure with input parameters, a dialog requests values for the required parameters.
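For instance, once the VDB is deployed, a query such as the following can be typed in to verify the data a view returns; the view and column names here are hypothetical and stand in for objects from your own models.

SELECT CustomerName, TotalOrders
FROM OrdersSummaryView
WHERE TotalOrders > 100
ORDER BY TotalOrders DESC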
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/testing_your_models
8.4. Install the ODBC Driver on Microsoft Windows
8.4. Install the ODBC Driver on Microsoft Windows Prerequisites Administrative permissions are required. Procedure 8.1. Install the ODBC Driver on Microsoft Windows Download the correct ODBC driver package ( jboss-dv-psqlodbc-[version]- X .zip ) from https://access.redhat.com/jbossnetwork/ . Unzip the installation package. Double-click the jboss-dv-psqlodbc-[version]- X .msi file to start the installer. The installer wizard is displayed. Click Next . The End-User License Agreement is displayed. Click I accept the terms in the License Agreement if you accept the licensing terms and then click Next . If you want to install in a directory other than the default directory shown, click the Browse button and select a directory. Click Next . You are presented with a confirmation screen. Review the choices you have made and click Install to begin installation. Click Finish . Note Installation packages for different operating systems can be downloaded from http://access.redhat.com .
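After installation, applications usually connect through an ODBC data source (DSN) that points the driver at the running JBoss Data Virtualization server. The sketch below shows typical DSN settings; the driver display name, host, port, VDB name, and credentials are assumptions (the port shown is the common Teiid ODBC default), so confirm the actual values for your installation.

Driver   = <name registered by the installer, for example a psqlODBC-based driver entry>
Server   = dvhost.example.com
Port     = 35432
Database = MyVDB
UID      = dvuser
PWD      = dvpassword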
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/install_the_odbc_driver_on_microsoft_windows1
Add-on services
Add-on services Red Hat OpenShift Service on AWS 4 Adding services to Red Hat OpenShift Service on AWS clusters Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/add-on_services/index
20.5. Displaying Information about a Guest Virtual Machine and the Hypervisor
20.5. Displaying Information about a Guest Virtual Machine and the Hypervisor The virsh list command lists the guest virtual machines connected to your hypervisor that fit the requested search parameter. The output of the command has three columns in a table. Each guest virtual machine is listed with its ID, name, and state. A wide variety of search parameters is available for virsh list . These options are described in the man page, which you can view by running man virsh , or by running the virsh list --help command. Note This command only displays guest virtual machines created by the root user. If it does not display a virtual machine you know you have created, it is probable that you did not create the virtual machine as root. Guests created using the virt-manager interface are by default created by root. Example 20.1. How to list all locally connected virtual machines The following example lists all the virtual machines your hypervisor is connected to. Note that this command lists both persistent and transient virtual machines. Example 20.2. How to list the inactive guest virtual machines The following example lists guests that are currently inactive, or not running. Note that the list only contains persistent virtual machines. In addition, the following commands can be used to display basic information about the hypervisor: # virsh hostname - displays the hypervisor's host name, for example: # virsh sysinfo - displays the XML representation of the hypervisor's system information, if available, for example:
[ "virsh list --all Id Name State ------------------------------------------------ 8 guest1 running 22 guest2 paused 35 guest3 shut off 38 guest4 shut off", "virsh list --inactive Id Name State ------------------------------------------------ 35 guest3 shut off 38 guest4 shut off", "virsh hostname dhcp-2-157.eus.myhost.com", "virsh sysinfo <sysinfo type='smbios'> <bios> <entry name='vendor'>LENOVO</entry> <entry name='version'>GJET71WW (2.21 )</entry> [...]" ]
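As a further illustration of the search parameters, the following invocations filter the listing; option availability can vary slightly between virsh versions, so check the virsh list --help output on your system.

# virsh list --all --name       # print only the names of all defined guests, one per line
# virsh list --state-paused     # list only guests that are currently paused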
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-editing_and_displaying_a_description_and_title_of_a_domain
7.318. mysql
7.318. mysql 7.318.1. RHSA-2013:0772 - Important: mysql security update Updated mysql packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. MySQL is a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon (mysqld) and many client programs and libraries. Security Fixes CVE-2012-5614 , CVE-2013-1506 , CVE-2013-1521 , CVE-2013-1531 , CVE-2013-1532 , CVE-2013-1544 , CVE-2013-1548 , CVE-2013-1552 , CVE-2013-1555 , CVE-2013-2375 , CVE-2013-2378 , CVE-2013-2389 , CVE-2013-2391 , CVE-2013-2392 This update fixes several vulnerabilities in the MySQL database server. Information about these flaws can be found on the Oracle Critical Patch Update Advisory page . These updated packages upgrade MySQL to version 5.1.69. For more information, refer to the MySQL release notes located here: http://dev.mysql.com/doc/relnotes/mysql/5.1/en/news-5-1-68.html http://dev.mysql.com/doc/relnotes/mysql/5.1/en/news-5-1-69.html All MySQL users should upgrade to these updated packages, which correct these issues. After installing this update, the MySQL server daemon (mysqld) will be restarted automatically.
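On a registered Red Hat Enterprise Linux 6 system the update can typically be applied with yum; the package names below are illustrative, since yum resolves the exact set of MySQL packages installed on your system.

# yum update mysql mysql-server mysql-libs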
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/mysql
Chapter 8. Set Up Replication Mode
Chapter 8. Set Up Replication Mode 8.1. About Replication Mode Red Hat JBoss Data Grid's replication mode is a simple clustered mode. Cache instances automatically discover neighboring instances on other Java Virtual Machines (JVMs) on the same network and subsequently form a cluster with the discovered instances. Any entry added to a cache instance is replicated across all cache instances in the cluster and can be retrieved locally from any cluster cache instance. In JBoss Data Grid's replication mode, return values are locally available before the replication occurs.
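To illustrate the behavior in Library mode, the following is a minimal programmatic sketch using the embedded Infinispan API that JBoss Data Grid is based on; the cache name is arbitrary and the default clustered (JGroups) settings are assumed rather than taken from this guide.

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ReplicatedCacheExample {
    public static void main(String[] args) {
        // Build a clustered cache manager; JGroups discovers neighboring JVMs on the network.
        DefaultCacheManager manager = new DefaultCacheManager(
                GlobalConfigurationBuilder.defaultClusteredBuilder().build());
        // Define a synchronously replicated cache: every entry is copied to all cluster members.
        manager.defineConfiguration("replicatedCache",
                new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());
        Cache<String, String> cache = manager.getCache("replicatedCache");
        cache.put("key", "value"); // readable locally from any cache instance in the cluster
        manager.stop();
    }
}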
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-set_up_replication_mode
Configuring notifications on the Red Hat Hybrid Cloud Console
Configuring notifications on the Red Hat Hybrid Cloud Console Red Hat Hybrid Cloud Console 1-latest Configuring Hybrid Cloud Console settings so that account users receive event-triggered notifications Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html-single/configuring_notifications_on_the_red_hat_hybrid_cloud_console/index
7.2.2. Configuration changes for Windows virtual machines
7.2.2. Configuration changes for Windows virtual machines Warning Before converting Windows virtual machines, ensure that the libguestfs-winsupport and virtio-win packages are installed on the host running virt-v2v . These packages provide support for NTFS and Windows paravirtualized block and network drivers. If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail. If you attempt to convert a virtual machine running Windows without the virtio-win package installed, the conversion will fail giving an error message concerning missing files. See Section 4.3.1.2, "Preparing to convert a virtual machine running Windows" for details. virt-v2v can convert virtual machines running Windows XP, Windows Vista, Windows 7, Windows Server 2003 and Windows Server 2008. The conversion process for virtual machines running Windows is slightly different from the process for virtual machines running Linux. Windows virtual machine images are converted as follows: virt-v2v installs VirtIO block drivers. virt-v2v installs the CDUpgrader utility. virt-v2v copies VirtIO block and network drivers to %SystemRoot%\Drivers\VirtIO . The virtio-win package does not include network drivers for Windows 7 and Windows XP. For those operating systems, the rtl8139 network drivers are used. rtl8139 support must already be available in the guest virtual machine. virt-v2v adds %SystemRoot%\Drivers\VirtIO to DevicePath , meaning this directory is automatically searched for drivers when a new device is detected. virt-v2v makes registry changes to include the VirtIO block drivers in the CriticalDeviceDatabase section of the registry, and to ensure the CDUpgrader service is started at boot. At this point, virt-v2v has completed the conversion. The converted virtual machine is now fully functional, and the conversion is complete for output to KVM managed by libvirt. If the virtual machine is being converted for output to Red Hat Enterprise Virtualization, the Red Hat Enterprise Virtualization Manager will perform additional steps to complete the conversion: The virtual machine is imported and run on the Manager. See the Red Hat Enterprise Virtualization Administration Guide for details. Important The first boot stage can take several minutes to run, and must not be interrupted. It will run automatically without any administrator intervention other than starting the virtual machine. To ensure the process is not interrupted, no user should log in to the virtual machine until it has quiesced. You can check for this in the Manager GUI. If the guest tools ISO has been uploaded to the Manager, as detailed in Section 4.3.1.2, "Preparing to convert a virtual machine running Windows" , the Manager attaches the guest tools CD to the virtual machine. CDUpgrader detects the guest tools ISO and installs all the VirtIO drivers from it, including additional tools that are not included in virtio-win . The VirtIO drivers are reinstalled if the drivers in the guest tools ISO are newer than the ones previously installed from virtio-win . This ensures that the tools are kept up to date.
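For reference, a conversion of a local Windows guest for Red Hat Enterprise Virtualization output generally takes a form such as the following; the export storage domain, network name, and guest name are placeholders, and the exact option set should be checked against the virt-v2v(1) man page for your version.

# virt-v2v -ic qemu:///system -o rhev -os storage.example.com:/exportdomain --network rhevm windows_guest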
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-v2v_guide-configuration_changes-configuration_changes_for_windows_virtual_machines
probe::ipmib.ForwDatagrams
probe::ipmib.ForwDatagrams Name probe::ipmib.ForwDatagrams - Count forwarded packets Synopsis ipmib.ForwDatagrams Values op value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global ForwDatagrams (equivalent to SNMP's MIB IPSTATS_MIB_OUTFORWDATAGRAMS).
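For example, the following short script uses this probe to total forwarded datagrams and print a running count every five seconds; the interval and output format are arbitrary choices for illustration.

# stap -e 'global fwd
probe ipmib.ForwDatagrams { fwd += op }
probe timer.s(5) { printf("forwarded datagrams in last 5s: %d\n", fwd); fwd = 0 }'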
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-forwdatagrams
Chapter 13. Ingress [networking.k8s.io/v1]
Chapter 13. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressSpec describes the Ingress the user wishes to exist. status object IngressStatus describe the current state of the Ingress. 13.1.1. .spec Description IngressSpec describes the Ingress the user wishes to exist. Type object Property Type Description defaultBackend object IngressBackend describes all endpoints for a given service and port. ingressClassName string ingressClassName is the name of an IngressClass cluster resource. Ingress controller implementations use this field to know whether they should be serving this Ingress resource, by a transitive connection (controller IngressClass Ingress resource). Although the kubernetes.io/ingress.class annotation (simple constant name) was never formally defined, it was widely supported by Ingress controllers to create a direct binding between Ingress controller and Ingress resources. Newly created Ingress resources should prefer using the field. However, even though the annotation is officially deprecated, for backwards compatibility reasons, ingress controllers should still honor that annotation if present. rules array rules is a list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend. rules[] object IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue. tls array tls represents the TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. tls[] object IngressTLS describes the transport layer security associated with an ingress. 13.1.2. .spec.defaultBackend Description IngressBackend describes all endpoints for a given service and port. Type object Property Type Description resource TypedLocalObjectReference resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service". service object IngressServiceBackend references a Kubernetes Service as a Backend. 
13.1.3. .spec.defaultBackend.service Description IngressServiceBackend references a Kubernetes Service as a Backend. Type object Required name Property Type Description name string name is the referenced service. The service must exist in the same namespace as the Ingress object. port object ServiceBackendPort is the service port being referenced. 13.1.4. .spec.defaultBackend.service.port Description ServiceBackendPort is the service port being referenced. Type object Property Type Description name string name is the name of the port on the Service. This is a mutually exclusive setting with "Number". number integer number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name". 13.1.5. .spec.rules Description rules is a list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend. Type array 13.1.6. .spec.rules[] Description IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue. Type object Property Type Description host string host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the "host" part of the URI as defined in RFC 3986: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The : delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue. host can be "precise" which is a domain name without the terminating dot of a network host (e.g. "foo.bar.com") or "wildcard", which is a domain name prefixed with a single wildcard label (e.g. " .foo.com"). The wildcard character ' ' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "*"). Requests will be matched against the Host field in the following way: 1. If host is precise, the request matches this rule if the http host header is equal to Host. 2. If host is a wildcard, then the request matches this rule if the http host header is to equal to the suffix (removing the first label) of the wildcard rule. http object HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://<host>/<path>?<searchpart> backend where where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'. 13.1.7. .spec.rules[].http Description HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://<host>/<path>?<searchpart> backend where where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'. Type object Required paths Property Type Description paths array paths is a collection of paths that map requests to backends. paths[] object HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend. 13.1.8. 
.spec.rules[].http.paths Description paths is a collection of paths that map requests to backends. Type array 13.1.9. .spec.rules[].http.paths[] Description HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend. Type object Required pathType backend Property Type Description backend object IngressBackend describes all endpoints for a given service and port. path string path is matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix". pathType string pathType determines the interpretation of the path matching. PathType can be one of the following values: * Exact: Matches the URL path exactly. * Prefix: Matches based on a URL path prefix split by '/'. Matching is done on a path element by element basis. A path element refers is the list of labels in the path split by the '/' separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). * ImplementationSpecific: Interpretation of the Path matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. Implementations are required to support all path types. Possible enum values: - "Exact" matches the URL path exactly and with case sensitivity. - "ImplementationSpecific" matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. - "Prefix" matches based on a URL path prefix split by '/'. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the '/' separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). If multiple matching paths exist in an Ingress spec, the longest matching path is given priority. Examples: - /foo/bar does not match requests to /foo/barbaz - /foo/bar matches request to /foo/bar and /foo/bar/baz - /foo and /foo/ both match requests to /foo and /foo/. If both paths are present in an Ingress spec, the longest matching path (/foo/) is given priority. 13.1.10. .spec.rules[].http.paths[].backend Description IngressBackend describes all endpoints for a given service and port. Type object Property Type Description resource TypedLocalObjectReference resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service". service object IngressServiceBackend references a Kubernetes Service as a Backend. 13.1.11. .spec.rules[].http.paths[].backend.service Description IngressServiceBackend references a Kubernetes Service as a Backend. Type object Required name Property Type Description name string name is the referenced service. The service must exist in the same namespace as the Ingress object. 
port object ServiceBackendPort is the service port being referenced. 13.1.12. .spec.rules[].http.paths[].backend.service.port Description ServiceBackendPort is the service port being referenced. Type object Property Type Description name string name is the name of the port on the Service. This is a mutually exclusive setting with "Number". number integer number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name". 13.1.13. .spec.tls Description tls represents the TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. Type array 13.1.14. .spec.tls[] Description IngressTLS describes the transport layer security associated with an ingress. Type object Property Type Description hosts array (string) hosts is a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified. secretName string secretName is the name of the secret used to terminate TLS traffic on port 443. Field is left optional to allow TLS routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the "Host" header is used for routing. 13.1.15. .status Description IngressStatus describe the current state of the Ingress. Type object Property Type Description loadBalancer object IngressLoadBalancerStatus represents the status of a load-balancer. 13.1.16. .status.loadBalancer Description IngressLoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array ingress is a list containing ingress points for the load-balancer. ingress[] object IngressLoadBalancerIngress represents the status of a load-balancer ingress point. 13.1.17. .status.loadBalancer.ingress Description ingress is a list containing ingress points for the load-balancer. Type array 13.1.18. .status.loadBalancer.ingress[] Description IngressLoadBalancerIngress represents the status of a load-balancer ingress point. Type object Property Type Description hostname string hostname is set for load-balancer ingress points that are DNS based. ip string ip is set for load-balancer ingress points that are IP based. ports array ports provides information about the ports exposed by this LoadBalancer. ports[] object IngressPortStatus represents the error condition of a service port 13.1.19. .status.loadBalancer.ingress[].ports Description ports provides information about the ports exposed by this LoadBalancer. Type array 13.1.20. .status.loadBalancer.ingress[].ports[] Description IngressPortStatus represents the error condition of a service port Type object Required port protocol Property Type Description error string error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer port is the port number of the ingress port. protocol string protocol is the protocol of the ingress port. 
The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 13.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingresses GET : list or watch objects of kind Ingress /apis/networking.k8s.io/v1/watch/ingresses GET : watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses DELETE : delete collection of Ingress GET : list or watch objects of kind Ingress POST : create an Ingress /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses GET : watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} DELETE : delete an Ingress GET : read the specified Ingress PATCH : partially update the specified Ingress PUT : replace the specified Ingress /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses/{name} GET : watch changes to an object of kind Ingress. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status GET : read status of the specified Ingress PATCH : partially update status of the specified Ingress PUT : replace status of the specified Ingress 13.2.1. /apis/networking.k8s.io/v1/ingresses HTTP method GET Description list or watch objects of kind Ingress Table 13.1. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty 13.2.2. /apis/networking.k8s.io/v1/watch/ingresses HTTP method GET Description watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. Table 13.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.3. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses HTTP method DELETE Description delete collection of Ingress Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Ingress Table 13.5. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty HTTP method POST Description create an Ingress Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body Ingress schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 202 - Accepted Ingress schema 401 - Unauthorized Empty 13.2.4. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses HTTP method GET Description watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. Table 13.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.5. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} Table 13.10. Global path parameters Parameter Type Description name string name of the Ingress HTTP method DELETE Description delete an Ingress Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Ingress Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Ingress Table 13.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.15. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Ingress Table 13.16. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.17. Body parameters Parameter Type Description body Ingress schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty 13.2.6. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses/{name} Table 13.19. Global path parameters Parameter Type Description name string name of the Ingress HTTP method GET Description watch changes to an object of kind Ingress. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 13.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.7. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status Table 13.21. Global path parameters Parameter Type Description name string name of the Ingress HTTP method GET Description read status of the specified Ingress Table 13.22. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Ingress Table 13.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.24. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Ingress Table 13.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.26. Body parameters Parameter Type Description body Ingress schema Table 13.27. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty
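The following minimal manifest shows how the fields described above fit together; the host name, Service name, port, and TLS secret are illustrative values rather than defaults of the API.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # assumed Service name
                port:
                  number: 8080          # assumed Service port
  tls:
    - hosts:
        - app.example.com
      secretName: example-tls           # assumed TLS secret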
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/ingress-networking-k8s-io-v1
Chapter 2. User interfaces
Chapter 2. User interfaces There are different interfaces for managing certificates and subsystems, depending on the user's role: administrators, agents, auditors, and end users. 2.1. User interfaces overview Administrators can use the following interfaces to securely interact with a completed Certificate System installation: The PKI command-line interface and other command-line utilities The PKI Console graphical interface The Certificate System web interface. Which interface is used depends on the administrator's preferences and functionality available. Common actions using these interfaces are described in the remainder of the guide after this chapter. These interfaces require configuration prior to use for secure communication with the Certificate System server over TLS. Using these clients without proper configuration is not allowed. Some of these tools use TLS mutual authentication. When required, their required initialization procedure includes configuring this. Some examples of using the PKI command-line utility are described in Section 2.5.1.2, "Using the "pki" CLI" . Additional examples are shown through the rest of the guide. By default, the PKI command-line interface uses the NSS database in the user's ~/.dogtag/nssdb/ directory. Section 2.5.1.1, "Initializing the pki CLI" provides detailed steps for initializing the NSS database with the administrator's certificate and key. In addition, there are various command-line utilities used to interface with Certificate System (as an administrator in other user roles), for example to submit CMC requests, manage generated certificates, and so on. These utilities are described briefly in Section 2.5, "Command-line interfaces" , such as Section 2.5.2, "AtoB" . They are utilized in later sections such as Section 5.2.2, "Creating a CSR using PKCS10Client" . The Certificate System web interface allows administrative access through the Firefox web browser. Section 2.4.1, "Browser initialization" describes instructions about configuring the client authentication. Other sections in Section 2.4, "Web interface" describe using the web interface of Certificate System. The Certificate System's PKI Console is a graphical interface. Please note that it is being deprecated. Section 2.3.1, "Initializing pkiconsole " describes how to initialize this console interface. Section 2.3.2, "Using pkiconsole for CA, OCSP, KRA, and TKS subsystems" gives an overview of using it. Note To terminate a PKI Console session, click the Exit button. To terminate a web browser session, close the browser. A command-line utility terminates itself as soon as it performs the action and returns to the prompt, so no action is needed on the administrator's part to terminate the session. 2.2. Client NSS database initialization On Red Hat Certificate System, certain interfaces may need to access the server using TLS client certificate authentication (mutual authentication). Before performing server-side admin tasks, you need to: Prepare an NSS database for the client. This can be a new database or an existing one. Import the CA certificate chain and trust them. Have a certificate and corresponding key. They can be generated in the NSS database or imported from somewhere else, such as from a PKCS #12 file. Based on the utility, you need to initialize the NSS database accordingly. See: Section 2.5.1.1, "Initializing the pki CLI" Section 2.3.1, "Initializing pkiconsole " Section 2.4.1, "Browser initialization" 2.3. 
Graphical interface The Certificate System console, pkiconsole , is a graphical interface that is designed for users with the Administrator role privilege to manage the subsystem itself. This includes adding users, configuring logs, managing profiles and plugins, and the internal database, among many other functions. This utility communicates with the Certificate System server via TLS using client-authentication and can be used to manage the server remotely. Important pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. 2.3.1. Initializing pkiconsole To use the pkiconsole interface for the first time, specify a new password and use the following command: This command creates a new client NSS database in the ~/.redhat-idm-console/ directory. To import the CA certificate into the PKI client NSS database, see 10.5 Importing a certificate into an NSS Database in the Planning, Installation and Deployment Guide (Common Criteria Edition) . To request a new client certificate, see Chapter 5, Requesting, enrolling and managing certificates . Execute the following command to extract the admin client certificate from the .p12 file: Validate and import the admin client certificate as described in Chapter 10 Managing Certificate/Key Crypto Token in the Planning, Installation and Deployment Guide (Common Criteria Edition) : Important Make sure all intermediate certificates and the root CA certificate have been imported before importing the CA admin client certificate. To import an existing client certificate and its key into the client NSS database: Verify the client certificate with the following command: 2.3.2. Using pkiconsole for CA, OCSP, KRA, and TKS subsystems The Java console is used by four subsystems: the CA, OCSP, KRA, and TKS. The console is accessed using a locally-installed pkiconsole utility. It can access any subsystem because the command requires the host name, the subsystem's administrative TLS port, and the specific subsystem type. If DNS is not configured, you can use an IPv4 or IPv6 address to connect to the console. For example: This opens a console, as in the below figure: Figure 2.1. Certificate System console The Configuration tab controls all of the setup for the subsystem, as the name implies. The choices available in this tab are different depending on which subsystem type the instance is; the CA has the most options since it has additional configuration for jobs, notifications, and certificate enrollment authentication. All subsystems have four basic options: Users and groups Access control lists Log configuration Subsystem certificates (meaning the certificates issued to the subsystem for use, for example, in the security domain or audit signing) The Status tab shows the logs maintained by the subsystem. 2.4. Web interface This section describes the web interface that allows administrative access to Red Hat Certificate System through the Firefox web browser. 2.4.1. Browser initialization This section explains browser initialization for Firefox to access PKI services. Importing a CA certificate Click Menu Preferences Privacy & Security View certificates . Select the Authorities tab and click the Import button. 
Select the ca.crt file and click Import . Importing a client certificate Click Options Preferences Privacy & Security View certificates . Select the Your Certificates tab. Click on Import and select the client p12 file, such as ca_admin_cert.p12 . Enter the password for the client certificate on the prompt. Click OK . Verify that an entry is added under Your Certificates . Accessing the web console You can access the PKI services by opening https:// host_name :port in your browser. 2.4.2. The administrative interfaces All subsystems use a HTML-based administrative interface. It is accessed by entering the host name and secure port as the URL, authenticating with the administrator's certificate, and clicking the appropriate Administrators link. Note There is a single TLS port for all subsystems which is used for both administrator and agent services. Access to those services is restricted by certificate-based authentication. The HTML admin interface is much more limited than the Java console; the primary administrative function is managing the subsystem users. The TPS only allows operations to manage users for the TPS subsystem. However, the TPS admin page can also list tokens and display all activities (including normally-hidden administrative actions) performed on the TPS. Figure 2.2. TPS admin page 2.4.3. Agent interfaces The agent services pages are where almost all of the certificate and token management tasks are performed. These services are HTML-based, and agents authenticate to the site using a special agent certificate. Figure 2.3. Certificate Manager's agent services page The operations vary depending on the subsystem: The Certificate Manager agent services include approving certificate requests (which issues the certificates), revoking certificates, and publishing certificates and CRLs. All certificates issued by the CA can be managed through its agent services page. The TPS agent services, like the CA agent services, manages all of the tokens which have been formatted and have had certificates issued to them through the TPS. Tokens can be enrolled, suspended, and deleted by agents. Two other roles (operator and admin) can view tokens in web services pages, but cannot perform any actions on the tokens. KRA agent services pages process key recovery requests, which set whether to allow a certificate to be issued reusing an existing key pair if the certificate is lost. The OCSP agent services page allows agents to configure CAs which publish CRLs to the OCSP, to load CRLs to the OCSP manually, and to view the state of client OCSP requests. The TKS is the only subsystem without an agent services page. 2.4.4. End user pages The CA and TPS both process direct user requests in some way. That means that end users have to have a way to connect with those subsystems. The CA has end-user, or end-entities , HTML services. The TPS uses the Enterprise Security Client. The end-user services are accessed over standard HTTP using the server's host name and the standard port number; they can also be accessed over HTTPS using the server's host name and the specific end-entities TLS port. For CAs, each type of TLS certificate is processed through a specific online submission form, called a profile . There are about two dozen certificate profiles for the CA, covering all sorts of certificates - user TLS certificates, server TLS certificates, log and file signing certificates, email certificates, and every kind of subsystem certificate. There can also be custom profiles. Figure 2.4. 
Certificate Manager's end-entities page End users retrieve their certificates through the CA pages when the certificates are issued. They can also download CA chains and CRLs and can revoke or renew their certificates through those pages. 2.5. Command-line interfaces This section discusses command-line utilities. 2.5.1. The "pki" CLI The pki command-line interface (CLI) provides access to various services on the server using the REST interface (see 2.3.4 REST Interface in the Planning, Installation and Deployment Guide (Common Criteria Edition) . You can invoke the CLI as follows: Note that the CLI options must be placed before the command, and the command parameters after the command. 2.5.1.1. Initializing the pki CLI To use the command line interface for the first time, specify a new password and use the following command: This will create a new client NSS database in the ~/.dogtag/nssdb directory. The password must be specified in all CLI operations that use the client NSS database. Alternatively, if the password is stored in a file, you can specify the file using the -C option. For example: To import the CA certificate into the client NSS database, refer to 10.5 Importing a certificate into an NSS Database in the Planning, Installation and Deployment Guide (Common Criteria Edition) . Some commands may require client certificate authentication. To import an existing client certificate and its key into the client NSS database, specify the PKCS #12 file and the password, and execute the following command: First, extract the admin client certificate from the .p12 file: Validate and import the admin client certificate as described in Chapter 10 Managing Certificate/Key Crypto Token in the Planning, Installation and Deployment Guide (Common Criteria Edition) : Important Make sure all intermediate certificates and the root CA certificate have been imported before importing the CA admin client certificate. To import an existing client certificate and its key into the client NSS database, specify the PKCS #12 file and the password, and execute the following command: To verify the client certificate, run the following command: 2.5.1.2. Using the "pki" CLI The command line interface supports a number of commands organized in a hierarchical structure. To list the top-level commands, execute the pki command without any additional commands or parameters: Some commands have subcommands. To list them, execute pki with the command name and no additional options. For example: To view command usage information, use the --help option: To view manual pages, specify the help command: To execute a command that does not require authentication, specify the command and its parameters (if required), for example: To execute a command that requires client certificate authentication, specify the certificate nickname, the client NSS database password, and optionally the server URL: For example: By default, the CLI communicates with the server at http:// local_host_name :8080 . To communicate with a server at a different location, specify the URL with the -U option, for example: 2.5.2. AtoB The AtoB utility decodes the Base64-encoded certificates to their binary equivalents. For example: For further details, more options, and additional examples, see the AtoB(1) man page. 2.5.3. AuditVerify The AuditVerify utility verifies integrity of the audit logs by validating the signature on log entries. 
For example: The example verifies the audit logs using the Log Signing Certificate ( -n ) in the ~jsmith/auditVerifyDir NSS database ( -d ). The list of logs to verify ( -a ) is in the ~jsmith/auditVerifyDir/logListFile file, comma-separated and ordered chronologically. The prefix ( -P ) to prepend to the certificate and key database file names is empty. The output is verbose ( -v ). For further details, more options, and additional examples, see the AuditVerify(1) man page or Section 12.2.1, "Displaying and verifying signed audit logs" . 2.5.4. BtoA The BtoA utility encodes binary data in Base64. For example: For further details, more options, and additional examples, see the BtoA(1) man page. 2.5.5. CMCRequest The CMCRequest utility creates a certificate issuance or revocation request. For example: Note All options to the CMCRequest utility are specified as part of the configuration file passed to the utility. See the CMCRequest(1) man page for configuration file options and further information. Also see Section 5.3, "Requesting and receiving certificates using CMC" and Section 6.2.1.1, "Revoking a certificate using CMCRequest " . 2.5.6. CMCResponse The CMCResponse utility is used to parse CMC responses returned from CMC issuance or revocation requests. For example: For further details, more options, and additional examples, see the CMCResponse(1) man page. Important Running CMCResponse with the "-v" option returns the PEM of each certificate in the chain as Cert:0 , Cert:1 , and so on. Below all the PEMs, the output also displays each certificate in the chain in pretty print format. Because the certificates are not displayed in a fixed order, to determine their position in the chain, examine the "Subject:" under each "Certificate" . The corresponding PEM is displayed in the same position above. 2.5.7. CMCRevoke Legacy. Do not use. 2.5.8. CMCSharedToken The CMCSharedToken utility encrypts a user passphrase for shared-secret CMC requests. For example: The shared passphrase ( -s ) is encrypted and stored in the cmcSharedTok2.b64 file ( -o ) using the certificate named subsystemCert cert-pki-tomcat ( -n ) found in the NSS database in the current directory ( -d ). The default security token internal is used (as -h is not specified) and the token password of myNSSPassword is used for accessing the token. For further details, more options, and additional examples, see the CMCSharedtoken(1) man page and also Section 8.4.1, "Creating a Shared Secret token" . 2.5.9. CRMFPopClient The CRMFPopClient utility is a Certificate Request Message Format (CRMF) client that uses NSS databases and supplies Proof of Possession. For example: This example creates a new CSR with the cn=subject_name subject DN ( -n ), the NSS database in the current directory ( -d ), the kra.transport certificate to use for transport ( -b ), and the AES/CBC/PKCS5Padding key wrap algorithm ( -w ). Verbose output is specified ( -v ), and the resulting CSR is written to the /user_or_entity_database_directory/example.csr file ( -o ). For further details, more options, and additional examples, see the output of the CRMFPopClient --help command and also Section 5.2.3, "Creating a CSR using CRMFPopClient" . 2.5.10. HttpClient The HttpClient utility is an NSS-aware HTTP client for submitting CMC requests. For example: Note All parameters to the HttpClient utility are stored in the request.cfg file. For further information, see the output of the HttpClient --help command. 2.5.11.
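Because BtoA and AtoB are inverse operations, a quick round trip is an easy way to confirm that an encoded file was not truncated or altered. This is only an illustrative sketch; the file names are arbitrary.

# Encode a binary file, decode it again, and compare the result with the original.
BtoA input.bin output.ascii
AtoB output.ascii roundtrip.bin
cmp input.bin roundtrip.bin && echo "round trip OK"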
OCSPClient OCSPClient is an Online Certificate Status Protocol (OCSP) client for checking certificate revocation status. For example: This example queries the server.example.com OCSP server ( -h ) on port 8080 ( -p ) to check whether the certificate signed by caSigningCert cert-pki-ca ( -c ) with serial number 2 ( --serial ) is valid. The NSS database in the /etc/pki/pki-tomcat/alias directory is used. For further details, more options, and additional examples, see the output of the OCSPClient --help command. 2.5.12. PKCS10Client The PKCS10Client utility creates a CSR in PKCS10 format for RSA and EC keys, optionally on an HSM. For example: This example creates a new RSA ( -a ) key with 2048 bits ( -l ) in the /etc/dirsrv/slapd-instance_name/ directory ( -d ) with the database password password ( -p ). The output CSR is stored in the ~/ds.csr file ( -o ) and the certificate DN is CN=USDHOSTNAME ( -n ). For further details, more options, and additional examples, see the PKCS10Client(1) man page. 2.5.13. PrettyPrintCert The PrettyPrintCert utility displays the contents of a certificate in a human-readable format. For example: This command parses the ascii_data.cert file and displays its contents in human-readable format. The output includes information like signature algorithm, exponent, modulus, and certificate extensions. For further details, more options, and additional examples, see the PrettyPrintCert(1) man page. 2.5.14. PrettyPrintCrl The PrettyPrintCrl utility displays the content of a CRL file in a human-readable format. For example: This command parses the ascii_data.crl file and displays its contents in human-readable format. The output includes information, such as the revocation signature algorithm, the issuer of the revocation, and a list of revoked certificates and their reason. For further details, more options, and additional examples, see the PrettyPrintCrl(1) man page. 2.5.15. TokenInfo The TokenInfo utility lists all tokens in an NSS database. For example: This command lists all tokens (HSMs, soft tokens, and so on) registered in the specified database directory. For further details, more options, and additional examples, see the output of the TokenInfo command. 2.5.16. tkstool The tkstool utility is used to interact with the Token Key Service (TKS) subsystem. For example: This command creates a new master key ( -M ) named new_master ( -n ) in the /var/lib/pki/pki-tomcat/alias NSS database on the HSM token_name . For further details, more options, and additional examples, see the output of the tkstool -H command.
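When an HSM is involved, it can help to list the registered tokens first and then pass the reported token name to tkstool . The following sketch simply chains the two commands shown in this section; the database path and token name follow the examples above.

# List the tokens registered in the NSS database, then create the master key
# on the HSM token reported by TokenInfo.
TokenInfo /var/lib/pki/pki-tomcat/alias/
tkstool -M -n new_master -d /var/lib/pki/pki-tomcat/alias -h token_name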
[ "pki -c password -d ~/.redhat-idm-console client-init", "openssl pkcs12 -in file -clcerts -nodes -nokeys -out file.crt", "PKICertImport -d ~/.redhat-idm-console -n \"nickname\" -t \",,\" -a -i file.crt -u C", "pki -c password -d ~/.redhat-idm-console pkcs12-import --pkcs12-file file --pkcs12-password pkcs12-password", "certutil -V -u C -n \"nickname\" -d ~/.redhat-idm-console", "pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:admin_port/subsystem_type", "https://192.0.2.1:8443/ca https://[2001:DB8::1111]:8443/ca", "pki [CLI options] <command> [command parameters]", "pki -c <password> client-init", "pki -C password_file client-init", "openssl pkcs12 -in file -clcerts -nodes -nokeys -out file.crt", "PKICertImport -d ~/.dogtag/nssdb -n \"nickname\" -t \",,\" -a -i file.crt -u C", "pki -c <password> pkcs12-import --pkcs12-file <file> --pkcs12-password <password>", "certutil -V -u C -n \"nickname\" -d ~/.dogtag/nssdb", "pki", "pki ca", "pki ca-cert", "pki --help", "pki ca-cert-find --help", "pki help", "pki help ca-cert-find", "pki ca-cert-find", "pki -U <server URL> -n <nickname> -c <password> <command> [command parameters]", "pki -n jsmith -c password ca-user-find", "pki -U https://server.example.com:8443 -n jsmith -c password ca-user-find", "AtoB input.ascii output.bin", "AuditVerify -d ~jsmith/auditVerifyDir -n Log Signing Certificate -a ~jsmith/auditVerifyDir/logListFile -P \"\" -v", "BtoA input.bin output.ascii", "CMCRequest example.cfg", "*CMCResponse -d /home/agentSmith/certdb_dir -i /home/agentSmith/certdb_dir/cmc.dirsrv_pkcs10.resp -o /home/agentSmith/certdb_dir/Server-Cert.crt*", "CMCSharedToken -d . -p myNSSPassword -s \"shared_passphrase\" -o cmcSharedTok2.b64 -n \"subsystemCert cert-pki-tomcat\"", "CRMFPopClient -d . -p password -n \"cn=subject_name\" -q POP_SUCCESS -b kra.transport -w \"AES/CBC/PKCS5Padding\" -t false -v -o /user_or_entity_database_directory/example.csr", "HttpClient request.cfg", "OCSPClient -h server.example.com -p 8080 -d /etc/pki/pki-tomcat/alias -c \"caSigningCert cert-pki-ca\" --serial 2", "PKCS10Client -d /etc/dirsrv/slapd-instance_name/ -p password -a rsa -l 2048 -o ~/ds.csr -n \"CN=USDHOSTNAME\"", "PrettyPrintCert ascii_data.cert", "PrettyPrintCrl ascii_data.crl", "TokenInfo ./nssdb/", "tkstool -M -n new_master -d /var/lib/pki/pki-tomcat/alias -h token_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/user-interfaces
function::user_string_utf16
function::user_string_utf16 Name function::user_string_utf16 - Retrieves UTF-16 string from user memory Synopsis Arguments addr The user address to retrieve the string from Description This function returns a null terminated UTF-8 string converted from the UTF-16 string at a given user memory address. Reports an error on string copy fault or conversion error.
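As an illustration only, the tapset function can be called from a small SystemTap probe. In the following hedged sketch the target binary, the probed function, and the $msg target variable are hypothetical placeholders; only the user_string_utf16 call itself comes from this reference.

# Hypothetical probe; the binary, function name, and $msg argument are
# placeholders for illustration.
stap -e 'probe process("./utf16app").function("log_message") {
  printf("message: %s\n", user_string_utf16($msg))
}'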
[ "user_string_utf16:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-utf16
Chapter 12. Configuring AMD SEV Compute nodes to provide memory encryption for instances
Chapter 12. Configuring AMD SEV Compute nodes to provide memory encryption for instances As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled. This feature is available starting with the 2nd Gen AMD EPYC™ 7002 Series ("Rome"). To enable your cloud users to create instances that use memory encryption, you must perform the following tasks: Designate the AMD SEV Compute nodes for memory encryption. Configure the Compute nodes for memory encryption. Deploy the overcloud. Create a flavor or image for launching instances with memory encryption. Tip If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . 12.1. Secure Encrypted Virtualization (SEV) Secure Encrypted Virtualization (SEV), provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key. SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised. For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation. Limitations of instances with memory encryption You cannot live migrate, or suspend and resume instances with memory encryption. You cannot use PCI passthrough to directly access devices on instances with memory encryption. You cannot use virtio-blk as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL-8.1.0). Note You can use virtio-scsi or SATA as the boot disk, or virtio-blk for non-boot disks. The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8 . Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYC™ 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYC™ 7002 Series ("Rome") the limit is 255. Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages; therefore, you cannot overcommit memory on a Compute node that hosts instances with memory encryption. You cannot use memory encryption with instances that have multiple NUMA nodes. 12.2. Designating AMD SEV Compute nodes for memory encryption To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new role file to configure the AMD SEV role, and configure a new overcloud flavor and AMD SEV resource class to use to tag the Compute nodes for memory encryption.
Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file that includes the ComputeAMDSEV role, along with any other roles that you need for the overcloud. The following example generates the roles data file roles_data_amd_sev.yaml , which includes the roles Controller and ComputeAMDSEV : Open roles_data_amd_sev.yaml and edit or add the following parameters and sections: Role comment: change Role: Compute to Role: ComputeAMDSEV . Role name: change name: Compute to name: ComputeAMDSEV . description: change Basic Compute Node role to AMD SEV Compute Node role . HostnameFormatDefault: change %stackname%-novacompute-%index% to %stackname%-novacomputeamdsev-%index% . deprecated_nic_config_name: change compute.yaml to compute-amd-sev.yaml . Register the AMD SEV Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Create the compute-amd-sev overcloud flavor for AMD SEV Compute nodes: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Retrieve a list of your nodes to identify their UUIDs: Tag each bare metal node that you want to designate for memory encryption with a custom AMD SEV resource class: Replace <node> with the ID of the bare metal node. Associate the compute-amd-sev flavor with the custom AMD SEV resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances: Optional: If the network topology of the ComputeAMDSEV role is different from the network topology of your Compute role, then create a custom network interface template. For more information, see Custom network interface templates in the Advanced Overcloud Customization guide. If the network topology of the ComputeAMDSEV role is the same as the Compute role, then you can use the default network topology defined in compute.yaml . Register the Net::SoftwareConfig of the ComputeAMDSEV role in your network-environment.yaml file: Replace <amd_sev_net_top> with the name of the file that contains the network topology of the ComputeAMDSEV role, for example, compute.yaml to use the default network topology. Add the following parameters to the node-info.yaml file to specify the number of AMD SEV Compute nodes, and the flavor that you want to use for the AMD SEV designated Compute nodes: To verify that the role was created, enter the following command: Example output: 12.3. Configuring AMD SEV Compute nodes for memory encryption To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware.
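After tagging the nodes and setting the flavor properties, it can be useful to confirm that the flavor carries the custom resource class and the zeroed scheduling properties. The following check is a sketch; openstack flavor show and the -c and -f output options are standard OpenStack CLI flags rather than commands taken from this procedure.

# Run from the undercloud to confirm the properties set on the flavor.
openstack flavor show compute-amd-sev -c properties -f yaml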
Prerequisites Your deployment must include a Compute node that runs on AMD hardware capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable: Procedure Open your Compute environment file. Optional: Add the following configuration to your Compute environment file to specify the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently: Note The default value of the libvirt/num_memory_encrypted_guests parameter is none . If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch. Optional: To specify that all x86_64 images use the q35 machine type by default, add the following configuration to your Compute environment file: If you specify this parameter value, you do not need to set the hw_machine_type property to q35 on every AMD SEV instance image. To ensure that the AMD SEV Compute nodes reserve enough memory for host-level services to function, add 16MB for each potential AMD SEV instance: Configure the kernel parameters for the AMD SEV Compute nodes: Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 12.4. Creating an image for memory encryption When the overcloud contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption. Procedure Create a new image for memory encryption: Note If you use an existing image, the image must have the hw_firmware_type property set to uefi . Optional: Add the property hw_mem_encryption=True to the image to enable AMD SEV memory encryption on the image: Tip You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption . Optional: Set the machine type to q35 , if not already set in the Compute node configuration: Optional: To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the image extra specs: Tip You can also specify this trait on the flavor. For more information, see Creating a flavor for memory encryption . 12.5. Creating a flavor for memory encryption When the overcloud contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption. Note An AMD SEV flavor is necessary only when the hw_mem_encryption property is not set on an image. Procedure Create a flavor for memory encryption: To schedule memory-encrypted instances on a SEV-capable host aggregate, add the following trait to the flavor extra specs: 12.6. Launching an instance with memory encryption To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance. 
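Once the image and flavor exist, their properties can be reviewed before handing them to cloud users. The following sketch uses the standard openstack image show and openstack flavor show commands with the names created in this chapter; the -c output option is a generic OpenStack CLI flag, not part of this procedure.

# Confirm the SEV-related properties on the image and the flavor.
openstack image show amd-sev-image -c properties
openstack flavor show m1.small-amd-sev -c properties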
Procedure Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption : Log in to the instance as a cloud user. To verify that the instance uses memory encryption, enter the following command from the instance:
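As an optional, admin-side cross-check, you can also confirm which Compute node the instance was scheduled on and compare it against your AMD SEV nodes. This sketch relies on the standard Nova extended server attribute and requires admin credentials; it is not part of the documented procedure.

# Show the hypervisor that hosts the instance (requires admin credentials).
openstack server show amd-sev-instance -c 'OS-EXT-SRV-ATTR:hypervisor_hostname'
# Inside the instance, the dmesg check shown above confirms that SEV is active.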
[ "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_amd_sev.yaml Compute:ComputeAMDSEV Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> compute-amd-sev", "(undercloud)USD openstack baremetal node list", "(undercloud)USD openstack baremetal node set --resource-class baremetal.AMD-SEV <node>", "(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_AMD_SEV=1 compute-amd-sev", "(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-amd-sev", "resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::ComputeCPUPinning::Net::SoftwareConfig: /home/stack/templates/nic-configs/<amd_sev_net_top>.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml", "parameter_defaults: OvercloudComputeAMDSEVFlavor: compute-amd-sev ComputeAMDSEVCount: 3", "(undercloud)USD openstack baremetal node list --long -c \"UUID\" -c \"Instance UUID\" -c \"Resource Class\" -c \"Provisioning State\" -c \"Power State\" -c \"Last Error\" -c \"Fault\" -c \"Name\" -f json", "[ { \"Fault\": null, \"Instance UUID\": \"e8e60d37-d7c7-4210-acf7-f04b245582ea\", \"Last Error\": null, \"Name\": \"compute-0\", \"Power State\": \"power on\", \"Provisioning State\": \"active\", \"Resource Class\": \"baremetal.AMD-SEV\", \"UUID\": \"b5a9ac58-63a7-49ba-b4ad-33d84000ccb4\" }, { \"Fault\": null, \"Instance UUID\": \"3ec34c0b-c4f5-4535-9bd3-8a1649d2e1bd\", \"Last Error\": null, \"Name\": \"compute-1\", \"Power State\": \"power on\", \"Provisioning State\": \"active\", \"Resource Class\": \"compute\", \"UUID\": \"432e7f86-8da2-44a6-9b14-dfacdf611366\" }, { \"Fault\": null, \"Instance UUID\": \"4992c2da-adde-41b3-bef1-3a5b8e356fc0\", \"Last Error\": null, \"Name\": \"controller-0\", \"Power State\": \"power on\", \"Provisioning State\": \"active\", \"Resource Class\": \"controller\", \"UUID\": \"474c2fc8-b884-4377-b6d7-781082a3a9c0\" } ]", "lscpu | grep sev", "parameter_defaults: ComputeAMDSEVExtraConfig: nova::config::nova_config: libvirt/num_memory_encrypted_guests: value: 15", "parameter_defaults: ComputeAMDSEVParameters: NovaHWMachineType: x86_64=q35", "parameter_defaults: ComputeAMDSEVParameters: NovaReservedHostMemory: <libvirt/num_memory_encrypted_guests * 16>", "parameter_defaults: ComputeAMDSEVParameters: KernelArgs: \"hugepagesz=1GB hugepages=32 default_hugepagesz=1GB mem_encrypt=on kvm_amd.sev=1\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)USD openstack image create ... 
--property hw_firmware_type=uefi amd-sev-image", "(overcloud)USD openstack image set --property hw_mem_encryption=True amd-sev-image", "(overcloud)USD openstack image set --property hw_machine_type=q35 amd-sev-image", "(overcloud)USD openstack image set --property trait:HW_CPU_X86_AMD_SEV=required amd-sev-image", "(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 --property hw:mem_encryption=True m1.small-amd-sev", "(overcloud)USD openstack flavor set --property trait:HW_CPU_X86_AMD_SEV=required m1.small-amd-sev", "(overcloud)USD openstack server create --flavor m1.small-amd-sev --image amd-sev-image amd-sev-instance", "dmesg | grep -i sev AMD Secure Encrypted Virtualization (SEV) active" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-amd-sev-compute-nodes-to-provide-memory-encryption-for-instances_amd-sev
Installing on IBM Z and IBM LinuxONE
Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.15 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup 
vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True 
False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. 
IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup 
vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit 
configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign", "[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.", "Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vm_name} --memory {memory} --vcpus {vcpus} --disk {disk} --launchSecurity type=\"s390-pv\" \\ 1 --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2", "virt-install --connect qemu:///system --name {vm_name} --vcpus {vcpus} --memory {memory_mb} --disk {vm_name}.qcow2,size={image_size| 
default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 
4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign", "[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.", "Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vm_name} --memory {memory} --vcpus {vcpus} --disk {disk} --launchSecurity type=\"s390-pv\" \\ 1 --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2", "virt-install --connect qemu:///system --name {vm_name} --vcpus {vcpus} --memory {memory_mb} --disk {vm_name}.qcow2,size={image_size| 
default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False 
False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs 
<pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", 
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc 
edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. 
IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", 
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit 
configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: 
additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_ibm_z_and_ibm_linuxone/index
Chapter 6. Installing a private cluster on IBM Power Virtual Server
Chapter 6. Installing a private cluster on IBM Power Virtual Server In OpenShift Container Platform version 4.14, you can install a private cluster into an existing VPC and IBM Power(R) Virtual Server Workspace. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 6.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.3. Private clusters in IBM Power Virtual Server To create a private cluster on IBM Power(R) Virtual Server, you must provide an existing private Virtual Private Cloud (VPC) and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. 
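This internal-only behavior is configured at installation time rather than after deployment. As shown in the sample install-config.yaml file later in this chapter, the relevant setting is: publish: Internal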
The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public Ingress A public DNS zone that matches the baseDomain for the cluster You will also need to create an IBM(R) DNS service containing a DNS zone that matches your baseDomain . Unlike standard deployments on Power VS which use IBM(R) CIS for DNS, you must use IBM(R) DNS for your DNS service. 6.3.1. Limitations Private clusters on IBM Power(R) Virtual Server are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 6.4. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.4.1. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 6.4.2. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 6.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. 
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.9. Manually creating the installation configuration file When installing a private OpenShift Container Platform cluster, you must manually generate the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 6.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. 
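For example, on a hypothetical single-socket machine with 2 cores and SMT enabled at 8 threads per core, the formula gives (8 x 2) x 1 = 16 vCPUs.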
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcName: name-of-existing-vpc 9 cloudConnectionName: powervs-region-example-cloud-con-priv vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceID: "powervs-region-service-instance-id" publish: Internal 10 pullSecret: '{"auths": ...}' 11 sshKey: ssh-ed25519 AAAA... 12 1 4 If you do not provide these parameters and values, the installation program provides the default value. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 7 The machine CIDR must contain the subnets for the compute machines and control plane machines. 8 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 9 Specify the name of an existing VPC. 10 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. 11 Required. 
The installation program prompts you for this value. 12 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
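As an illustration of the noProxy format described above, a value that combines a subdomain match, a network CIDR, and a single host might look like the following (all values are placeholders, not required entries): noProxy: .example.com,10.0.0.0/16,registry.internal.example.com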
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
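If you need the kubeadmin password again later, you can read it from the auth directory that the installation program creates, for example: cat <installation_directory>/auth/kubeadmin-password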
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. steps Customize your cluster Optional: Opt out of remote health reporting
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 9 cloudConnectionName: powervs-region-example-cloud-con-priv vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" publish: Internal 10 pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-private-cluster
Chapter 3. Using Camel JBang
Chapter 3. Using Camel JBang Camel JBang is a JBang-based Camel application for running Camel routes. 3.1. Installing Camel JBang Prerequisites JBang must be installed on your machine. See the instructions on how to download and install JBang. After JBang is installed, you can verify that JBang is working by executing the following command from a command shell: jbang version This outputs the version of the installed JBang. Procedure Run the following command to install the Camel JBang application: jbang app install camel@apache/camel This installs Apache Camel as the camel command within JBang. This means that you can run Camel from the command line by just executing the camel command. 3.2. Using Camel JBang The Camel JBang supports multiple commands. You can display all the available commands with the --help option: camel --help Note The first time you run this command, it may cause dependencies to be cached, therefore taking a few extra seconds to run. If you are already using JBang and you get errors such as Exception in thread "main" java.lang.NoClassDefFoundError: "org/apache/camel/dsl/jbang/core/commands/CamelJBangMain" , try clearing the JBang cache and re-install again. All the commands support the --help flag and will display the appropriate help if that flag is provided. 3.2.1. Enable shell completion Camel JBang provides shell completion for bash and zsh out of the box. To enable shell completion for Camel JBang, run: source <(camel completion) To make it permanent, run: echo 'source <(camel completion)' >> ~/.bashrc 3.3. Creating and running Camel routes You can create a new basic route with the init command. For example, to create an XML route, run the following command: camel init cheese.xml This creates the file cheese.xml (in the current directory) with a sample route. To run the file, run: camel run cheese.xml Note You can create and run any of the supported DSLs in Camel such as YAML, XML, Java, Groovy. To create a new .java route, run: camel init foo.java When you use the init command, Camel by default creates the file in the current directory. However, you can use the --directory option to create the file in the specified directory. For example, to create it in a folder named foobar , run: camel init foo.java --directory=foobar Note When you use the --directory option, Camel automatically cleans this directory if it already exists. 3.3.1. Running routes from multiple files You can run routes from more than one file, for example, to run two YAML files: camel run one.yaml two.yaml You can run routes from two different files such as yaml and Java: camel run one.yaml hello.java You can use wildcards (i.e. * ) to match multiple files, such as running all the yaml files: camel run *.yaml You can run all files starting with foo*: camel run foo* To run all the files in the directory, use: camel run * Note The run goal can also detect files that are properties , such as application.properties . 3.3.2. Running routes from input parameter For very small Java routes, it is possible to provide the route as a CLI argument, as shown below: camel run --code='from("kamelet:beer-source").to("log:beer")' This is very limited, as a CLI argument is more cumbersome to use than files. When you run routes from an input parameter, remember that: Only Java DSL code is supported. Code is wrapped in single quotes, so you can use double quotes in Java DSL. Code is limited to the literal values that are possible to provide from the terminal and JBang. All route(s) must be defined in a single --code parameter.
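For example, a small variation on the snippet above that keeps double quotes inside the single-quoted argument: camel run --code='from("timer:tick?period=1000").log("tick")'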
Note Using --code is only usable for very quick and small prototypes. 3.3.3. Dev mode with live reload You can enable the dev mode that comes with live reload of the route(s) when the source file is updated (saved), using the --dev option as shown: camel run foo.yaml --dev Then while the Camel integration is running, you can update the YAML route and it is reloaded when you save the file. This option works for all DSLs including java , for example: camel run hello.java --dev Note The live reload option is meant for development purposes only, and if you encounter problems with reloading such as JVM class loading issues, then you may need to restart the integration. 3.3.4. Developer Console You can enable the developer console, which presents a variety of information to the developer. To enable the developer console, run: camel run hello.java --console The console is then accessible from a web browser at http://localhost:8080/q/dev (by default). The link is also displayed in the log when Camel is starting up. The console can give you insights into your running Camel integration, such as reporting the top routes that take the longest time to process messages. You can then identify the slowest individual EIPs in these routes. The developer console can also output the data in JSON format, which can be used by 3rd-party tooling to capture the information. For example, to output the top routes via curl, run: curl -s -H "Accept: application/json" http://0.0.0.0:8080/q/dev/top/ If you have jq installed, which can format and output the JSON data in colour, run: curl -s -H "Accept: application/json" http://0.0.0.0:8080/q/dev/top/ | jq 3.3.5. Using profiles A profile in Camel JBang is a name (id) that refers to the configuration that is loaded automatically with Camel JBang. The default profile is named application , which is a smart default that lets Camel JBang automatically load application.properties (if present). This means that you can create profiles that match a specific properties file with the same name. For example, running with a profile named local means that Camel JBang will load local.properties instead of application.properties . To use a profile, specify the command line option --profile as shown: camel run hello.java --profile=local You can only specify one profile name at a time, for example, --profile=local,two is not valid. In the properties files you can configure all the configurations from Camel Main . For example, to turn off stream caching and enable log masking, set the following: camel.main.streamCaching=false camel.main.logMask=true You can also configure Camel components such as camel-kafka to declare the URL to the brokers: camel.component.kafka.brokers=broker1:9092,broker2:9092,broker3:9092 Note Keys starting with camel.jbang are reserved keys that are used by Camel JBang internally, and allow for pre-configuring arguments for Camel JBang commands. 3.3.6. Downloading JARs over the internet By default, Camel JBang automatically resolves the dependencies needed to run Camel; this is done by JBang and Camel respectively. Camel itself detects at runtime if a component needs JARs that are not currently available on the classpath, and can then automatically download the JARs. Camel downloads these JARs in the following order: from the local disk in ~/.m2/repository from the internet in Maven Central from the internet in the custom 3rd-party Maven repositories from all the repositories found in active profiles of ~/.m2/settings.xml or a settings file specified using the --maven-settings option.
If you do not want the Camel JBang to download over the internet, you can turn this off with the --download option, as shown: camel run foo.java --download=false 3.3.7. Adding custom JARs Camel JBang automatically detects the dependencies for the Camel components, languages, and data formats from its own release. This means that it is not necessary to specify which JARs to use. However, if you need to add 3rd-party custom JARs then you can specify these with the --deps as CLI argument in Maven GAV syntax ( groupId:artifactId:version ), such as: camel run foo.java --deps=com.foo:acme:1.0 camel run foo.java --deps=camel-saxon You can specify multiple dependencies separated by comma: camel run foo.java --deps=camel-saxon,com.foo:acme:1.0 3.3.8. Using 3rd-party Maven repositories Camel JBang downloads from the local repository first, and then from the online Maven Central repository. To download from the 3rd-party Maven repositories, you must specify this as CLI argument, or in the application.properties file. camel run foo.java --repos=https://packages.atlassian.com/maven-external Note You can specify multiple repositories separated by comma. The configuration for the 3rd-party Maven repositories is configured in the application.properties file with the key camel.jbang.repos as shown: camel.jbang.repos=https://packages.atlassian.com/maven-external When you run Camel route, the application.properties is automatically loaded: camel run foo.java You can also explicitly specify the properties file to use: camel run foo.java application.properties Or you can specify this as a profile: camel run foo.java --profile=application Where the profile id is the name of the properties file. 3.3.9. Configuration of Maven usage By default, the existing ~/.m2/settings.xml file is loaded, so it is possible to alter the behavior of the Maven resolution process. Maven settings file provides the information about the Maven mirrors, credential configuration (potentially encrypted) or active profiles and additional repositories. Maven repositories can use authentication and the Maven-way to configure credentials is through <server> elements: <server> <id>external-repository</id> <username>camel</username> <password>{SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=}</password> </server> While the password may be specified using plain text, it si recommended to configure the maven master password first and then use it to configure repository password: USD mvn -emp Master password: camel {hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=} The above password must be added to ~/.m2/settings-security.xml file as shown: <settingsSecurity> <master>{hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=}</master> </settingsSecurity> Then you can configure a normal password: USD mvn -ep Password: camel {SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=} Then you can use this password in the <server>/<password> configuration. By default, Maven reads the master password from ~/.m2/settings-security.xml file, but you can override it. Location of the settings.xml file itself can be specified as shown: camel run foo.java --maven-settings=/path/to/settings.xml --maven-settings-security=/path/to/settings-security.xml If you want to run Camel application without assuming any location (even ~/.m2/settings.xml ), use this option: camel run foo.java --maven-settings=false 3.3.10. Running routes hosted on GitHub You can run a route that is hosted on the GitHub using the Camels resource loader. 
For example, to run one of the Camel K examples, use: camel run github:apache:camel-kamelets-examples:jbang/hello-java/Hey.java You can also use the https URL for the GitHub. For example, you can browse the examples from a web-browser and then copy the URL from the browser window and run the example with Camel JBang: camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/hello-java You can also use wildcards (i.e. \* ) to match multiple files, such as running all the groovy files: camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/*.groovy Or you can run all files starting with rou*: camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/rou* 3.3.10.1. Running routes from the GitHub gists Using the gists from the GitHub is a quick way to share the small Camel routes that you can easily run. For example to run a gist, use: camel run https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 A gist can contain one or more files, and Camel JBang will gather all relevant files, so a gist can contain multiple routes, properties files, and Java beans. 3.3.11. Downloading routes hosted on the GitHub You can use Camel JBang to download the existing examples from GitHub to local disk, which allows to modify the example and to run locally. For example, you can download the dependency injection example by running the following command: camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection Then the files (not sub folders) are downloaded to the current directory. You can then run the example locally with: camel run * You can also download to the files to a new folder using the --directory option, for example to download the files to a folder named myproject , run: camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection --directory=myproject Note When using --directory option, Camel will automatically clean this directory if already exists. You can run the example in dev mode, to hot-deploy on the source code changes. camel run * --dev You can download a single file, for example, to download one of the Camel K examples, run: camel init https://github.com/apache/camel-k-examples/blob/main/generic-examples/languages/simple.groovy This is a groovy route, which you can run with (or use * ): camel run simple.groovy 3.3.11.1. Downloading routes form GitHub gists You can download the files from the gists as shown: camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 This downloads the files to local disk, which you can run afterwards: camel run * You can download to a new folder using the --directory option, for example, to download to a folder named foobar , run: camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 --directory=foobar Note When using --directory option, Camel automatically cleans this directory if already exists. 3.3.12. Using a specific Camel version You can specify which Camel version to run as shown: jbang run -Dcamel.jbang.version=3.20.1 camel@apache/camel [command] Note Older versions of Camel may not work as well with Camel JBang as the newest versions. It is recommended to use the versions starting from Camel 3.18 onwards. You can also try bleeding edge development by using SNAPSHOT such as: jbang run --fresh -Dcamel.jbang.version=3.20.1-SNAPSHOT camel@apache/camel [command] 3.3.13. 
Running the Camel K integrations or bindings Camel supports running the Camel K integrations and binding files, that are in the CRD format (Kubernetes Custom Resource Definitions).For example, to run a kamelet binding file named joke.yaml : #!/usr/bin/env jbang camel@apache/camel run apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: joke spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: chuck-norris-source properties: period: 2000 sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: show-headers: false camel run joke.yaml 3.3.14. Run from the clipboard You can run the Camel routes directly from the OS clipboard. This allows to copy some code, and then quickly run the route. camel run clipboard.<extension> Where <extension> is the type of the content of the clipboard is, such as java , xml , or yaml . For example, you can copy this to your clipboard and then run the route: <route> <from uri="timer:foo"/> <log message="Hello World"/> </route> camel run clipboard.xml 3.3.15. Controlling the local Camel integrations To list the Camel integrations that are currently running, use the ps option: camel ps PID NAME READY STATUS AGE 61818 sample.camel.MyCamelApplica... 1/1 Running 26m38s 62506 test1 1/1 Running 4m34s This lists the PID, the name and age of the integration. You can use the stop command to stop any of these running Camel integrations. For example to stop the test1 , run: camel stop test1 Stopping running Camel integration (pid: 62506) You can use the PID to stop the integration: camel stop 62506 Stopping running Camel integration (pid: 62506) Note You do not have to type the full name, as the stop command will match the integrations that starts with the input, for example you can type camel stop t to stop all integrations starting with t . To stop all integrations, use the --all option as follows: camel stop --all Stopping running Camel integration (pid: 61818) Stopping running Camel integration (pid: 62506) 3.3.16. Controlling the Spring Boot and Quarkus integrations The Camel JBang CLI by default only controls the Camel integrations that are running using the CLI, for example, camel run foo.java . For the CLI to be able to control and manage the Spring Boot or Quarkus applications, you need to add a dependency to these projects to integrate with the Camel CLI. Spring Boot In the Spring Boot application, add the following dependency: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cli-connector-starter</artifactId> </dependency> Quarkus In the Quarkus application, add the following dependency: <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cli-connector</artifactId> </dependency> 3.3.17. Getting the status of Camel integrations The get command in the Camel JBang is used for getting the Camel specific status for one or all of the running Camel integrations. To display the status of the running Camel integrations, run: camel get PID NAME CAMEL PLATFORM READY STATUS AGE TOTAL FAILED INFLIGHT SINCE-LAST 61818 MyCamel 3.20.1-SNAPSHOT Spring Boot v2.7.3 1/1 Running 28m34s 854 0 0 0s/0s/- 63051 test1 3.20.1-SNAPSHOT JBang 1/1 Running 18s 14 0 0 0s/0s/- 63068 mygroovy 3.20.1-SNAPSHOT JBang 1/1 Running 5s 2 0 0 0s/0s/- The camel get command displays the default integrations, which is equivalent to typing the camel get integrations or the camel get int commands. 
This displays the overall information for every Camel integration, where you can see the total number of messages processed. The Since Last column shows how long ago the last message was processed, for three stages (started/completed/failed). The value of 0s/0s/- means that the last started and completed message just happened (0 seconds ago), and that there has not been any failed message yet. For example, 9s/9s/1h3m means that the last started and completed message was 9 seconds ago, and the last failed message was 1 hour and 3 minutes ago. You can also see the status of every route from all the local Camel integrations with camel get route : camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 61818 MyCamel hello timer://hello?period=2000 Running 29m2s 870 0 0 0 0 14 0s/0s/- 63051 test1 java timer://java?period=1000 Running 46s 46 0 0 0 0 9 0s/0s/- 63068 mygroovy groovy timer://groovy?period=1000 Running 34s 34 0 0 0 0 5 0s/0s/- Note Use camel get --help to display all the available commands. 3.3.17.1. Top status of the Camel integrations The camel top command is used for getting top utilization statistics (highest to lowest heap used memory) of the running Camel integrations. camel top PID NAME JAVA CAMEL PLATFORM STATUS AGE HEAP NON-HEAP GC THREADS CLASSES 22104 chuck 11.0.13 3.20.1-SNAPSHOT JBang Running 2m10s 131/322/4294 MB 70/73 MB 17ms (6) 7/8 7456/7456 14242 MyCamel 11.0.13 3.20.1-SNAPSHOT Spring Boot v2.7.3 Running 33m40s 115/332/4294 MB 62/66 MB 37ms (6) 16/16 8428/8428 22116 bar 11.0.13 3.20.1-SNAPSHOT JBang Running 2m7s 33/268/4294 MB 54/58 MB 20ms (4) 7/8 6104/6104 The HEAP column shows the heap memory (used/committed/max) and the non-heap (used/committed). The GC column shows the garbage collection information (time and total runs). The CLASSES column shows the number of classes (loaded/total). You can also see the top performing routes (highest to lowest mean processing time) of every route from all the local Camel integrations with camel top route : camel top route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 22104 chuck chuck-norris-source-1 timer://chuck?period=10000 Started 10s 1 0 0 163 163 163 9s 22116 bar route1 timer://yaml2?period=1000 Started 7s 7 0 0 1 0 11 0s 22104 chuck chuck kamelet://chuck-norris-source Started 10s 1 0 0 0 0 0 9s 22104 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Started 10s 1 0 0 0 0 0 9s 14242 MyCamel hello timer://hello?period=2000 Started 31m41s 948 0 0 0 0 4 0s Note Use camel top --help to display all the available commands. 3.3.17.2. Starting and Stopping the routes The camel cmd command is used for executing miscellaneous commands in the running Camel integrations, for example, the commands to start and stop the routes.
To stop all the routes in the chuck integration, run: camel cmd stop-route chuck The status is then changed to Stopped for the chuck integration: camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 81663 chuck chuck kamelet://chuck-norris-source Stopped 600 0 0 0 0 1 4s 81663 chuck chuck-norris-source-1 timer://chuck?period=10000 Stopped 600 0 0 65 52 290 4s 81663 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Stopped 600 0 0 0 0 1 4s 83415 bar route1 timer://yaml2?period=1000 Started 5m30s 329 0 0 0 0 10 0s 83695 MyCamel hello timer://hello?period=2000 Started 3m52s 116 0 0 0 0 9 1s To start the routes again, run: camel cmd start-route chuck To stop all the routes in every Camel integration, use the --all flag as follows: camel cmd stop-route --all To start all the routes, use: camel cmd start-route --all Note You can start or stop one or more routes by their ids, separating them with commas, for example, camel cmd start-route --id=route1,hello . Use the camel cmd start-route --help command for more details. 3.3.17.3. Configuring the logging levels You can see the current logging levels of the running Camel integrations with: camel cmd logger PID NAME AGE LOGGER LEVEL 90857 bar 2m48s root INFO 91103 foo 20s root INFO The logging level can be changed at runtime. For example, to change the level for foo to DEBUG, run: camel cmd logger --level=DEBUG foo Note You can use --all to change logging levels for all running integrations. 3.3.17.4. Listing services Some Camel integrations may host a service which clients can call, such as REST, SOAP-WS, or socket-level services using TCP protocols. You can list the available services as shown in the example below: camel get service PID NAME COMPONENT PROTOCOL SERVICE 1912 netty netty tcp tcp:localhost:4444 2023 greetings platform-http rest http://0.0.0.0:7777/camel/greetings/{name} (GET) 2023 greetings platform-http http http://0.0.0.0:7777/q/dev Here, you can see two Camel integrations. The netty integration hosts a TCP service that is available on port 4444. The greetings integration hosts a REST service that can be called via GET only, and also exposes the embedded web console (started with the --console option). Note For a service to be listed, the Camel components must be able to advertise the services using the Camel Console . 3.3.17.5. Listing state of Circuit Breakers If your Camel integration uses the Circuit Breaker EIP (https://camel.apache.org/components/3.22.x/eips/circuitBreaker-eip.html), then you can output the status of the breakers with Camel JBang as follows: camel get circuit-breaker PID NAME COMPONENT ROUTE ID STATE PENDING SUCCESS FAIL REJECT 56033 mycb resilience4j route1 circuitBreaker1 HALF_OPEN 5 2 3 0 Here we can see the circuit breaker is in the half-open state, that is, a state where the breaker attempts to transition back to closed if the failures start to drop. Note You can run the command with the watch option to show the latest state, for example, watch camel get circuit-breaker . 3.3.18. Using Jolokia and Hawtio The web console allows you to inspect the running Camel integrations, such as all the JMX management information, and not least to visualize the Camel routes with live performance metrics. To allow Hawtio to inspect the Camel integrations, the Jolokia JVM Agent must be installed in the running integration. To do this explicitly, first find the PID of the integration: camel ps PID NAME READY STATUS AGE 61818 sample.camel.MyCamelApplica...
1/1 Running 26m38s 62506 test1.java 1/1 Running 4m34s With the PID, you can then attach Jolokia: camel jolokia 62506 Started Jolokia for PID 62506 http://127.0.0.1:8778/jolokia/ Instead of using the PID, you can also attach by a name pattern. In this example, the two Camel integrations have unique names, so you can attach Jolokia without the PID as follows: camel jolokia te Started Jolokia for PID 62506 http://127.0.0.1:8778/jolokia/ Then you can launch Hawtio using Camel JBang: camel hawtio This will automatically download and start Hawtio, and then open it in the web browser. Note See camel hawtio --help for more options. When Hawtio launches in the web browser, click the Discover tab, which lists all locally available Jolokia agents. You can use camel jolokia PID to connect to multiple different Camel integrations, and from this list select which one to load. Click the green lightning icon to connect to the specific running Camel integration. You can uninstall the Jolokia JVM Agent from a running Camel integration when it is no longer needed: camel jolokia 62506 --stop Stopped Jolokia for PID 62506 It is also possible to achieve this with only one command, as follows: camel hawtio test1 Where test1 is the name of the running Camel integration. When you stop Hawtio (using ctrl + c ), Camel will attempt to uninstall the Jolokia JVM Agent. However, this sometimes fails because the JVM is being terminated, which can prevent camel-jbang from communicating with the JVM process of the running Camel integration. 3.3.19. Scripting from the terminal using pipes You can execute a Camel JBang file as a script that is used for terminal scripting with pipes and filters. Note Every time the script is executed, a JVM is started with Camel. This is not very fast or low on memory usage, so use Camel JBang terminal scripting, for example, to use the many Camel components or Kamelets to more easily send or receive data from disparate IT systems. This requires adding the following line at the top of the file, for example, as in the upper.yaml file below: ///usr/bin/env jbang --quiet camel@apache/camel pipe "$0" "$@" ; exit $? # Will upper-case the input - from: uri: "stream:in" steps: - setBody: simple: "${body.toUpperCase()}" - to: "stream:out" To execute this as a script, you need to set the execute file permission: chmod +x upper.yaml Then you can execute this as a script: echo "Hello\nWorld" | ./upper.yaml This outputs: HELLO WORLD You can turn on logging using --logging=true , which then logs to the .camel-jbang/camel-pipe.log file. The name of the logging file cannot be configured. echo "Hello\nWorld" | ./upper.yaml --logging=true 3.3.19.1. Using stream:in with line vs raw mode When using stream:in to read data from standard input, the Stream component works in two modes: line mode (default) - reads input as single lines (separated by line breaks). Message body is a String . raw mode - reads the entire stream until end of stream . Message body is a byte[] . Note The default mode is due to how the stream component was historically created. Therefore, you may want to set stream:in?readLine=false to use raw mode. 3.3.20. Running local Kamelets You can use Camel JBang to try local Kamelets, without the need to publish them on GitHub or package them in a jar. camel run --local-kamelet-dir=/path/to/local/kamelets earthquake.yaml Note When the kamelets are loaded from the local file system, they can be live reloaded when they are updated, if you run Camel JBang in --dev mode.
You can also point to a folder in a GitHub repository. For example: camel run --local-kamelet-dir=https://github.com/apache/camel-kamelets-examples/tree/main/custom-kamelets user.java Note If kamelets are loaded from GitHub, they cannot be live reloaded. 3.3.21. Using the platform-http component When a route is started from platform-http , Camel JBang automatically includes a Vert.x HTTP server running on port 8080. The following example shows the route in a file named server.yaml : - from: uri: "platform-http:/hello" steps: - set-body: constant: "Hello World" You can run this example with: camel run server.yaml And then call the HTTP service with: $ curl http://localhost:8080/hello Hello World% 3.3.22. Using Java beans and processors There is basic support for including regular Java source files together with Camel routes, and letting the Camel JBang runtime compile the Java source. This means you can include smaller utility classes, POJOs, and Camel Processors that the application needs. Note The Java source files cannot use package names. 3.3.23. Dependency Injection in Java classes When running the Camel integrations with camel-jbang , the runtime is camel-main based. This means there is no Spring Boot or Quarkus available. However, there is support for using annotation-based dependency injection in Java classes. 3.3.23.1. Using Spring Boot dependency injection You can use the following Spring Boot annotations: @org.springframework.stereotype.Component or @org.springframework.stereotype.Service on class level to create an instance of the class and register it in the Registry . @org.springframework.beans.factory.annotation.Autowired to dependency inject a bean on a class field. @org.springframework.beans.factory.annotation.Qualifier can be used to specify the bean id. @org.springframework.beans.factory.annotation.Value to inject a property placeholder , such as a property defined in application.properties . @org.springframework.context.annotation.Bean on a method to create a bean by invoking the method. 3.3.24. Debugging There are two kinds of debugging available: Java debugging - Java code debugging (Standard Java) Camel route debugging - Debugging Camel routes (requires Camel tooling plugins) 3.3.24.1. Java debugging You can debug your integration scripts by using the --debug flag provided by JBang. However, to enable Java debugging when starting the JVM, use the jbang command instead of camel , as shown: jbang --debug camel@apache/camel run hello.yaml Listening for transport dt_socket at address: 4004 As you can see, the default listening port is 4004, but it can be configured as described in JBang debugging . This is a standard Java debug socket. You can then use the IDE of your choice. You can add a Processor to place breakpoints that are hit during route execution (as opposed to route definition creation). 3.3.24.2. Camel route debugging The Camel route debugger is available by default (the camel-debug component is automatically added to the classpath). By default, it can be reached through JMX at the URL service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi/camel . You can then use the Integrated Development Environment (IDE) of your choice. 3.3.25. Health Checks The status of health checks is accessed using Camel JBang from the CLI as follows: camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 8s camel/context R UP 2/2/- 1s/3s/- Here you can see that Camel is UP . The application has been running for 8 seconds, and there are two health checks invoked.
The output shows the default level of checks as: CamelContext health check Component specific health checks (such as from camel-kafka or camel-aws ) Custom health checks Any check which are not UP The RATE column shows three numbers separated by / . So 2/2/- means 2 checks in total, 2 successful and no failures. The two last columns will reset when a health check changes state as this number is the number of consecutive checks that was successful or failure. So if the health check starts to fail then the numbers could be: camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 3m2s camel/context R UP 77/-/3 1s/-/17s some kind of error Here you can see the numbers is changed to 77/-/3 . This means the total number of checks is 77. There is no success, but the check has been failing 3 times in a row. The SINCE column corresponds to the RATE . So in this case you can see the last check was 1 second ago, and that the check has been failing for 17 second in a row. You can use --level=full to output every health checks that will include consumer and route level checks as well. A health check may often be failed due to an exception was thrown which can be shown using --trace flag: camel get health --trace PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61038 mykafka 6m19s camel/context R UP 187/187/- 1s/6m16s/- 61038 mykafka 6m19s camel/kafka-consumer-kafka-not-secure... R DOWN 187/-/187 1s/-/6m16s KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value ------------------------------------------------------------------------------------------------------------------------ STACK-TRACE ------------------------------------------------------------------------------------------------------------------------ PID: 61038 NAME: mykafka AGE: 6m19s CHECK-ID: camel/kafka-consumer-kafka-not-secured-source-1 STATE: DOWN RATE: 187 SINCE: 6m16s METADATA: bootstrap.servers = value group.id = 7d8117be-41b4-4c81-b4df-cf26b928d38a route.id = kafka-not-secured-source-1 topic = value MESSAGE: KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value org.apache.kafka.common.KafkaException: Failed to construct kafka consumer at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:645) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:625) at org.apache.camel.component.kafka.DefaultKafkaClientFactory.getConsumer(DefaultKafkaClientFactory.java:34) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumer(KafkaFetchRecords.java:241) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumerTask(KafkaFetchRecords.java:201) at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:123) at org.apache.camel.component.kafka.KafkaFetchRecords.run(KafkaFetchRecords.java:125) at java.base/java.util.concurrent.ExecutorsUSDRunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: value at 
org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:59) at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:730) ... 13 more Here you can see that the health check fails because of the org.apache.kafka.common.config.ConfigException , which is due to an invalid configuration: Invalid url in bootstrap.servers: value . Note Use camel get health --help to see all the various options. 3.4. Listing what Camel components are available Camel comes with a lot of artifacts out of the box, which are: components data formats expression languages miscellaneous components kamelets You can use the Camel CLI to list what Camel provides using the camel catalog command. For example, to list all the components: camel catalog components To see which Kamelets are available: camel catalog kamelets Note Use camel catalog --help to see all possible commands. 3.4.1. Displaying component documentation The doc goal can show quick documentation for every component, dataformat, and kamelet. For example, to see the kafka component, run: camel doc kafka Note The documentation is not the full documentation as shown on the website, as the Camel CLI does not have direct access to this information and can only show a basic description of the component, but it includes tables for every configuration option. To see the documentation for the jackson dataformat: camel doc jackson In some rare cases, there may be a component and a dataformat with the same name, and the doc goal prioritizes components. In such a situation you can prefix the name with dataformat, for example: camel doc dataformat:thrift You can also see the kamelet documentation, as shown: camel doc aws-kinesis-sink 3.4.1.1. Browsing online documentation from the Camel website You can use the doc command to quickly open the URL for the online documentation in the web browser. For example, to browse the kafka component, use --open-url : camel doc kafka --open-url This also works for data formats, languages, and kamelets. camel doc aws-kinesis-sink --open-url Note To just get the link to the online documentation, use camel doc kafka --url . 3.4.1.2. Filtering options listed in the tables Some components may have many options, and in such cases you can use the --filter option to only list the options that match the filter either in the name, description, or the group (producer, security, advanced). For example, to list only security related options: camel doc kafka --filter=security To list only something about timeout : camel doc kafka --filter=timeout 3.5. Open API Camel JBang allows you to quickly expose an OpenAPI service using a contract-first approach, where you have an existing OpenAPI specification file. Camel JBang bridges each API endpoint from the OpenAPI specification to a Camel route with the naming convention direct:<operationId> . This makes it quicker to implement a Camel route for a given operation. See the OpenAPI example for more details. 3.6. Gathering list of dependencies The dependencies are automatically resolved when you work with Camel JBang. This means that you do not have to use a build system like Maven or Gradle to add every Camel component as a dependency. However, you may want to know what dependencies are required to run the Camel integration. You can use the dependencies command to see the dependencies required.
The command output does not output a detailed tree, such as mvn dependencies:tree , as the output is intended to list which Camel components, and other JARs needed (when using Kamelets). The dependency output by default is vanilla Apache Camel with the camel-main as runtime, as shown: camel dependencies org.apache.camel:camel-dsl-modeline:3.20.0 org.apache.camel:camel-health:3.20.0 org.apache.camel:camel-kamelet:3.20.0 org.apache.camel:camel-log:3.20.0 org.apache.camel:camel-rest:3.20.0 org.apache.camel:camel-stream:3.20.0 org.apache.camel:camel-timer:3.20.0 org.apache.camel:camel-yaml-dsl:3.20.0 org.apache.camel.kamelets:camel-kamelets-utils:0.9.3 org.apache.camel.kamelets:camel-kamelets:0.9.3 The output is by default a line per maven dependency in GAV format (groupId:artifactId:version). You can specify the Maven format for the the output as shown: camel dependencies --output=maven <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-main</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dsl-modeline</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-health</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kamelet</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-log</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-stream</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-timer</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-yaml-dsl</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets-utils</artifactId> <version>0.9.3</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets</artifactId> <version>0.9.3</version> </dependency> You can also choose the target runtime as either`quarkus` or spring-boot as shown: camel dependencies --runtime=spring-boot org.springframework.boot:spring-boot-starter-actuator:2.7.5 org.springframework.boot:spring-boot-starter-web:2.7.5 org.apache.camel.springboot:camel-spring-boot-engine-starter:3.20.0 org.apache.camel.springboot:camel-dsl-modeline-starter:3.20.0 org.apache.camel.springboot:camel-kamelet-starter:3.20.0 org.apache.camel.springboot:camel-log-starter:3.20.0 org.apache.camel.springboot:camel-rest-starter:3.20.0 org.apache.camel.springboot:camel-stream-starter:3.20.0 org.apache.camel.springboot:camel-timer-starter:3.20.0 org.apache.camel.springboot:camel-yaml-dsl-starter:3.20 org.apache.camel.kamelets:camel-kamelets-utils:0.9.3 org.apache.camel.kamelets:camel-kamelets:0.9.3 3.7. Creating Projects You can export your Camel JBang integration to a traditional Java based project such as Spring Boot or Quarkus. You may want to do this after you have built a prototype using Camel JBang, and are in the need of a traditional Java based project with more need for Java coding, or to use the powerful runtimes of Spring Boot, Quarkus or vanilla Camel Main. 3.7.1. 
Exporting to Camel Spring Boot The command export --runtime=spring-boot exports your current Camel JBang file(s) to a Maven based Spring Boot project with files organized in src/main/ folder structure. For example, to export to the Spring Boot using the Maven groupId com.foo and the artifactId acme and with version 1.0-SNAPSHOT , run: camel export --runtime=spring-boot --gav=com.foo:acme:1.0-SNAPSHOT Note This will export to the current directory, this means that files are moved into the needed folder structure. To export to another directory, run: camel export --runtime=spring-boot --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject When exporting to the Spring Boot, the Camel version defined in the pom.xml or build.gradle is the same version as Camel JBang uses. However, you can specify the different Camel version as shown: camel export --runtime=spring-boot --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject --camel-spring-boot-version=3.20.1.redhat-00109 Note See the possible options by running the camel export --help command for more details. 3.7.2. Exporting with Camel CLI included When exporting to Spring Boot, Quarkus or Camel Main, the Camel JBang CLI is not included out of the box. To continue to use the Camel CLI (that is camel ), you need to add camel:cli-connector in the --deps option, as shown: camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --deps=camel:cli-connector --directory=../myproject 3.7.3. Configuring exporting The export command by default loads the configuration from application.properties file which is used for exporting specific parameters such as selecting the runtime and java version. The following options related to exporting , can be configured in the application.properties file: Option Description camel.jbang.runtime Runtime (spring-boot, quarkus, or camel-main) camel.jbang.gav The Maven group:artifact:version camel.jbang.dependencies Additional dependencies (Use commas to separate multiple dependencies). See more details at Adding custom JARs . camel.jbang.classpathFiles Additional files to add to classpath (Use commas to separate multiple files). See more details at Adding custom JARs . camel.jbang.javaVersion Java version (11 or 17) camel.jbang.kameletsVersion Apache Camel Kamelets version camel.jbang.localKameletDir Local directory for loading Kamelets camel.jbang.camelSpringBootVersion Camel version to use with Spring Boot camel.jbang.springBootVersion Spring Boot version camel.jbang.quarkusGroupId Quarkus Platform Maven groupId camel.jbang.quarkusArtifactId Quarkus Platform Maven artifactId camel.jbang.quarkusVersion Quarkus Platform version camel.jbang.mavenWrapper Include Maven Wrapper files in exported project camel.jbang.gradleWrapper Include Gradle Wrapper files in exported project camel.jbang.buildTool Build tool to use (maven or gradle) camel.jbang.repos Additional maven repositories for download on-demand (Use commas to separate multiple repositories) camel.jbang.mavenSettings Optional location of maven setting.xml file to configure servers, repositories, mirrors and proxies. If set to false, not even the default ~/.m2/settings.xml will be used. camel.jbang.mavenSettingsSecurity Optional location of maven settings-security.xml file to decrypt settings.xml camel.jbang.exportDir Directory where the project will be exported. camel.jbang.platform-http.port HTTP server port to use when running standalone Camel, such as when --console is enabled (port 8080 by default). 
camel.jbang.console Developer console at /q/dev on local HTTP server (port 8080 by default) when running standalone Camel. camel.jbang.health Health check at /q/health on local HTTP server (port 8080 by default) when running standalone Camel. Note These are the options from the export command. You can see more details and default values using camel export --help . 3.8. Troubleshooting When you use JBang, it stores its state in the ~/.jbang directory. This is also the location where JBang stores downloaded JARs. Camel JBang also downloads the needed dependencies while running. However, these dependencies are downloaded to your local Maven repository ~/.m2 . So when you troubleshoot problems such as an outdated JAR while running Camel JBang, try deleting these directories, or parts of them.
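As a minimal sketch, and assuming the default cache locations described above (the exact paths are an assumption and may differ on your machine, for example if you have relocated the JBang cache or the local Maven repository), you could clear the cached artifacts like this:
# remove the JBang download cache (assumed default location)
rm -rf ~/.jbang/cache
# remove only the cached Apache Camel artifacts from the local Maven repository (assumed default location)
rm -rf ~/.m2/repository/org/apache/camel
Both caches are simply re-populated the next time you run a camel command, so this is a safe first step when you suspect an outdated or corrupted JAR.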
[ "jbang version", "jbang app install camel@apache/camel", "camel --help", "source <(camel completion)", "echo 'source <(camel completion)' >> ~/.bashrc", "camel init cheese.xml", "camel run cheese.xml", "camel init foo.java", "camel init foo.java --directory=foobar", "camel run one.yaml two.yaml", "camel run one.yaml hello.java", "camel run *.yaml", "camel run foo*", "camel run *", "camel run --code='from(\"kamelet:beer-source\").to(\"log:beer\")'", "camel run foo.yaml --dev", "camel run hello.java --dev", "camel run hello.java --console", "curl -s -H \"Accept: application/json\" http://0.0.0.0:8080/q/dev/top/", "curl -s -H \"Accept: application/json\" http://0.0.0.0:8080/q/dev/top/ | jq", "camel run hello.java --profile=local", "camel.main.streamCaching=false camel.main.logMask=true", "camel.component.kafka.brokers=broker1:9092,broker2:9092,broker3:9092", "camel run foo.java --download=false", "camel run foo.java --deps=com.foo:acme:1.0", "To add a Camel dependency explicitly you can use a shorthand syntax (starting with `camel:` or `camel-`):", "camel run foo.java --deps=camel-saxon", "camel run foo.java --deps=camel-saxon,com.foo:acme:1.0", "camel run foo.java --repos=https://packages.atlassian.com/maven-external", "camel.jbang.repos=https://packages.atlassian.com/maven-external", "camel run foo.java", "camel run foo.java application.properties", "camel run foo.java --profile=application", "<server> <id>external-repository</id> <username>camel</username> <password>{SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=}</password> </server>", "mvn -emp Master password: camel {hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=}", "<settingsSecurity> <master>{hqXUuec2RowH8dA8vdqkF6jn4NU9ybOsDjuTmWvYj4U=}</master> </settingsSecurity>", "mvn -ep Password: camel {SSVqy/PexxQHvubrWhdguYuG7HnTvHlaNr6g3dJn7nk=}", "camel run foo.java --maven-settings=/path/to/settings.xml --maven-settings-security=/path/to/settings-security.xml", "camel run foo.java --maven-settings=false", "camel run github:apache:camel-kamelets-examples:jbang/hello-java/Hey.java", "camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/hello-java", "camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/*.groovy", "camel run https://github.com/apache/camel-kamelets-examples/tree/main/jbang/languages/rou*", "camel run https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92", "camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection", "camel run *", "camel init https://github.com/apache/camel-kamelets-examples/tree/main/jbang/dependency-injection --directory=myproject", "camel run * --dev", "camel init https://github.com/apache/camel-k-examples/blob/main/generic-examples/languages/simple.groovy", "camel run simple.groovy", "camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92", "camel run *", "camel init https://gist.github.com/davsclaus/477ddff5cdeb1ae03619aa544ce47e92 --directory=foobar", "jbang run -Dcamel.jbang.version=3.20.1 camel@apache/camel [command]", "jbang run --fresh -Dcamel.jbang.version=3.20.1-SNAPSHOT camel@apache/camel [command]", "#!/usr/bin/env jbang camel@apache/camel run apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: joke spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: chuck-norris-source properties: period: 2000 sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: show-headers: false", "camel run joke.yaml", "camel 
run clipboard.<extension>", "<route> <from uri=\"timer:foo\"/> <log message=\"Hello World\"/> </route>", "camel run clipboard.xml", "camel ps PID NAME READY STATUS AGE 61818 sample.camel.MyCamelApplica... 1/1 Running 26m38s 62506 test1 1/1 Running 4m34s", "camel stop test1 Stopping running Camel integration (pid: 62506)", "camel stop 62506 Stopping running Camel integration (pid: 62506)", "camel stop --all Stopping running Camel integration (pid: 61818) Stopping running Camel integration (pid: 62506)", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cli-connector-starter</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cli-connector</artifactId> </dependency>", "camel get PID NAME CAMEL PLATFORM READY STATUS AGE TOTAL FAILED INFLIGHT SINCE-LAST 61818 MyCamel 3.20.1-SNAPSHOT Spring Boot v2.7.3 1/1 Running 28m34s 854 0 0 0s/0s/- 63051 test1 3.20.1-SNAPSHOT JBang 1/1 Running 18s 14 0 0 0s/0s/- 63068 mygroovy 3.20.1-SNAPSHOT JBang 1/1 Running 5s 2 0 0 0s/0s/-", "camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 61818 MyCamel hello timer://hello?period=2000 Running 29m2s 870 0 0 0 0 14 0s/0s/- 63051 test1 java timer://java?period=1000 Running 46s 46 0 0 0 0 9 0s/0s/- 63068 mygroovy groovy timer://groovy?period=1000 Running 34s 34 0 0 0 0 5 0s/0s/-", "camel top PID NAME JAVA CAMEL PLATFORM STATUS AGE HEAP NON-HEAP GC THREADS CLASSES 22104 chuck 11.0.13 3.20.1-SNAPSHOT JBang Running 2m10s 131/322/4294 MB 70/73 MB 17ms (6) 7/8 7456/7456 14242 MyCamel 11.0.13 3.20.1-SNAPSHOT Spring Boot v2.7.3 Running 33m40s 115/332/4294 MB 62/66 MB 37ms (6) 16/16 8428/8428 22116 bar 11.0.13 3.20.1-SNAPSHOT JBang Running 2m7s 33/268/4294 MB 54/58 MB 20ms (4) 7/8 6104/6104", "camel top route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 22104 chuck chuck-norris-source-1 timer://chuck?period=10000 Started 10s 1 0 0 163 163 163 9s 22116 bar route1 timer://yaml2?period=1000 Started 7s 7 0 0 1 0 11 0s 22104 chuck chuck kamelet://chuck-norris-source Started 10s 1 0 0 0 0 0 9s 22104 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Started 10s 1 0 0 0 0 0 9s 14242 MyCamel hello timer://hello?period=2000 Started 31m41s 948 0 0 0 0 4 0s", "camel cmd stop-route chuck", "camel get route PID NAME ID FROM STATUS AGE TOTAL FAILED INFLIGHT MEAN MIN MAX SINCE-LAST 81663 chuck chuck kamelet://chuck-norris-source Stopped 600 0 0 0 0 1 4s 81663 chuck chuck-norris-source-1 timer://chuck?period=10000 Stopped 600 0 0 65 52 290 4s 81663 chuck log-sink-2 kamelet://source?routeId=log-sink-2 Stopped 600 0 0 0 0 1 4s 83415 bar route1 timer://yaml2?period=1000 Started 5m30s 329 0 0 0 0 10 0s 83695 MyCamel hello timer://hello?period=2000 Started 3m52s 116 0 0 0 0 9 1s", "camel cmd start-route chuck", "camel cmd stop-route --all", "camel cmd start-route --all", "camel cmd logger PID NAME AGE LOGGER LEVEL 90857 bar 2m48s root INFO 91103 foo 20s root INFO", "camel cmd logger --level=DEBUG foo", "camel get service PID NAME COMPONENT PROTOCOL SERVICE 1912 netty netty tcp tcp:localhost:4444 2023 greetings platform-http rest http://0.0.0.0:7777/camel/greetings/{name} (GET) 2023 greetings platform-http http http://0.0.0.0:7777/q/dev", "camel get circuit-breaker PID NAME COMPONENT ROUTE ID STATE PENDING SUCCESS FAIL REJECT 56033 mycb resilience4j route1 circuitBreaker1 HALF_OPEN 5 2 3 0", "camel ps PID NAME READY STATUS AGE 61818 sample.camel.MyCamelApplica... 
1/1 Running 26m38s 62506 test1.java 1/1 Running 4m34s", "camel jolokia 62506 Started Jolokia for PID 62506 http://127.0.0.1:8778/jolokia/", "camel jolokia te Started Jolokia for PID 62506 http://127.0.0.1:8778/jolokia/", "camel hawtio", "camel jolokia 62506 --stop Stopped Jolokia for PID 62506", "camel hawtio test1", "///usr/bin/env jbang --quiet camel@apache/camel pipe \"USD0\" \"USD@\" ; exit USD? Will upper-case the input - from: uri: \"stream:in\" steps: - setBody: simple: \"USD{body.toUpperCase()}\" - to: \"stream:out\"", "chmod +x upper.yaml", "echo \"Hello\\nWorld\" | ./upper.yaml", "HELLO WORLD", "echo \"Hello\\nWorld\" | ./upper.yaml --logging=true", "camel run --local-kamelet-dir=/path/to/local/kamelets earthquake.yaml", "camel run --local-kamelet-dir=https://github.com/apache/camel-kamelets-examples/tree/main/custom-kamelets user.java", "- from: uri: \"platform-http:/hello\" steps: - set-body: constant: \"Hello World\"", "camel run server.yaml", "curl http://localhost:8080/hello Hello World%", "jbang --debug camel@apache/camel run hello.yaml Listening for transport dt_socket at address: 4004", "camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 8s camel/context R UP 2/2/- 1s/3s/-", "camel get health PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61005 mybind 3m2s camel/context R UP 77/-/3 1s/-/17s some kind of error", "camel get health --trace PID NAME AGE ID RL STATE RATE SINCE MESSAGE 61038 mykafka 6m19s camel/context R UP 187/187/- 1s/6m16s/- 61038 mykafka 6m19s camel/kafka-consumer-kafka-not-secure... R DOWN 187/-/187 1s/-/6m16s KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value ------------------------------------------------------------------------------------------------------------------------ STACK-TRACE ------------------------------------------------------------------------------------------------------------------------ PID: 61038 NAME: mykafka AGE: 6m19s CHECK-ID: camel/kafka-consumer-kafka-not-secured-source-1 STATE: DOWN RATE: 187 SINCE: 6m16s METADATA: bootstrap.servers = value group.id = 7d8117be-41b4-4c81-b4df-cf26b928d38a route.id = kafka-not-secured-source-1 topic = value MESSAGE: KafkaConsumer is not ready - Error: Invalid url in bootstrap.servers: value org.apache.kafka.common.KafkaException: Failed to construct kafka consumer at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:664) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:645) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:625) at org.apache.camel.component.kafka.DefaultKafkaClientFactory.getConsumer(DefaultKafkaClientFactory.java:34) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumer(KafkaFetchRecords.java:241) at org.apache.camel.component.kafka.KafkaFetchRecords.createConsumerTask(KafkaFetchRecords.java:201) at org.apache.camel.support.task.ForegroundTask.run(ForegroundTask.java:123) at org.apache.camel.component.kafka.KafkaFetchRecords.run(KafkaFetchRecords.java:125) at java.base/java.util.concurrent.ExecutorsUSDRunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: 
org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: value at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:59) at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:730) ... 13 more", "camel catalog components", "camel catalog kamelets", "camel doc kafka", "camel doc jackson", "camel doc dataformat:thrift", "camel doc aws-kinesis-sink", "camel doc kafka --open-url", "camel doc aws-kinesis-sink --open-url", "camel doc kafka --filter=security", "camel doc kafka --filter=timeout", "camel dependencies org.apache.camel:camel-dsl-modeline:3.20.0 org.apache.camel:camel-health:3.20.0 org.apache.camel:camel-kamelet:3.20.0 org.apache.camel:camel-log:3.20.0 org.apache.camel:camel-rest:3.20.0 org.apache.camel:camel-stream:3.20.0 org.apache.camel:camel-timer:3.20.0 org.apache.camel:camel-yaml-dsl:3.20.0 org.apache.camel.kamelets:camel-kamelets-utils:0.9.3 org.apache.camel.kamelets:camel-kamelets:0.9.3", "camel dependencies --output=maven <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-main</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dsl-modeline</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-health</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kamelet</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-log</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-stream</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-timer</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-yaml-dsl</artifactId> <version>3.20.0</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets-utils</artifactId> <version>0.9.3</version> </dependency> <dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets</artifactId> <version>0.9.3</version> </dependency>", "camel dependencies --runtime=spring-boot org.springframework.boot:spring-boot-starter-actuator:2.7.5 org.springframework.boot:spring-boot-starter-web:2.7.5 org.apache.camel.springboot:camel-spring-boot-engine-starter:3.20.0 org.apache.camel.springboot:camel-dsl-modeline-starter:3.20.0 org.apache.camel.springboot:camel-kamelet-starter:3.20.0 org.apache.camel.springboot:camel-log-starter:3.20.0 org.apache.camel.springboot:camel-rest-starter:3.20.0 org.apache.camel.springboot:camel-stream-starter:3.20.0 org.apache.camel.springboot:camel-timer-starter:3.20.0 org.apache.camel.springboot:camel-yaml-dsl-starter:3.20 org.apache.camel.kamelets:camel-kamelets-utils:0.9.3 org.apache.camel.kamelets:camel-kamelets:0.9.3", "camel export --runtime=spring-boot --gav=com.foo:acme:1.0-SNAPSHOT", "camel export --runtime=spring-boot --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject", "camel export --runtime=spring-boot --gav=com.foo:acme:1.0-SNAPSHOT --directory=../myproject 
--camel-spring-boot-version=3.20.1.redhat-00109", "camel export --runtime=quarkus --gav=com.foo:acme:1.0-SNAPSHOT --deps=camel:cli-connector --directory=../myproject" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_user_guide/csb-using-camel-jbang
Chapter 8. Event [v1]
Chapter 8. Event [v1] Description Event is a report of an event somewhere in the cluster. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object Required metadata involvedObject 8.1. Specification Property Type Description action string What action was taken/failed regarding to the Regarding object. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources count integer The number of times this event has occurred. eventTime MicroTime Time when this Event was first observed. firstTimestamp Time The time at which the event was first recorded. (Time of server receipt is in TypeMeta.) involvedObject object ObjectReference contains enough information to let you inspect or modify the referred object. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lastTimestamp Time The time at which the most recent occurrence of this event was recorded. message string A human-readable description of the status of this operation. metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata reason string This should be a short, machine understandable string that gives the reason for the transition into the object's current status. related object ObjectReference contains enough information to let you inspect or modify the referred object. reportingComponent string Name of the controller that emitted this Event, e.g. kubernetes.io/kubelet . reportingInstance string ID of the controller instance, e.g. kubelet-xyzf . series object EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. source object EventSource contains information for an event. type string Type of this event (Normal, Warning), new types could be added in the future 8.1.1. .involvedObject Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.2. .related Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.1.3. .series Description EventSeries contain information on series of events, i.e. thing that was/is happening continuously for some time. Type object Property Type Description count integer Number of occurrences in this series up to the last heartbeat time lastObservedTime MicroTime Time of the last occurrence observed 8.1.4. .source Description EventSource contains information for an event. Type object Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated. 8.2. API endpoints The following API endpoints are available: /api/v1/events GET : list or watch objects of kind Event /api/v1/watch/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/events DELETE : delete collection of Event GET : list or watch objects of kind Event POST : create an Event /api/v1/watch/namespaces/{namespace}/events GET : watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/events/{name} DELETE : delete an Event GET : read the specified Event PATCH : partially update the specified Event PUT : replace the specified Event /api/v1/watch/namespaces/{namespace}/events/{name} GET : watch changes to an object of kind Event. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 8.2.1. /api/v1/events Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Event Table 8.2. HTTP responses HTTP code Reponse body 200 - OK EventList schema 401 - Unauthorized Empty 8.2.2. /api/v1/watch/events Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /api/v1/namespaces/{namespace}/events Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Event Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Event Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK EventList schema 401 - Unauthorized Empty HTTP method POST Description create an Event Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Event schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 202 - Accepted Event schema 401 - Unauthorized Empty 8.2.4. /api/v1/watch/namespaces/{namespace}/events Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Event. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /api/v1/namespaces/{namespace}/events/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the Event namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Event Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Event Table 8.23. HTTP responses HTTP code Response body 200 - OK Event schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Event Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Reponse body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Event Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Event schema Table 8.29. HTTP responses HTTP code Response body 200 - OK Event schema 201 - Created Event schema 401 - Unauthorized Empty 8.2.6. /api/v1/watch/namespaces/{namespace}/events/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the Event namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Event. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
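The endpoints documented above can be exercised directly once you are authenticated to the cluster. The following is a minimal sketch, not taken from this reference: the namespace openstack, the page size, and the use of jq are illustrative assumptions, and the watch example sets watch=true on the list endpoint, as the deprecation notes above recommend, instead of calling the /api/v1/watch paths.

# List Events in a namespace in pages of 50 objects (namespace and page size are examples).
oc get events -n openstack --chunk-size=50

# The same list call against the endpoint in section 8.2.3, using limit explicitly.
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/openstack/events?limit=50"

# Start a watch from the latest resourceVersion returned by a list call.
RV=$(curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/openstack/events?limit=1" | jq -r '.metadata.resourceVersion')
curl -skN -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/openstack/events?watch=true&resourceVersion=$RV&allowWatchBookmarks=true"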
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/metadata_apis/event-v1
Chapter 4. Creating the control plane
Chapter 4. Creating the control plane The Red Hat OpenStack Services on OpenShift (RHOSO) control plane contains the RHOSO services that manage the cloud. The RHOSO services run as a Red Hat OpenShift Container Platform (RHOCP) workload. Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run OpenStack CLI commands. 4.1. Prerequisites The OpenStack Operator ( openstack-operator ) is installed. For more information, see Installing and preparing the Operators . The RHOCP cluster is prepared for RHOSO networks. For more information, see Preparing RHOCP for RHOSO networks . The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack ). Use the following command to check the existing network policies on the cluster: This command returns the message "No resources found in openstack namespace" when there are no network policies. If this command returns a list of network policies, then check that they do not prevent communication between the openstack-operators namespace and the control plane namespace. For more information about network policies, see Network security in the RHOCP Networking guide. You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. 4.2. Creating the control plane Define an OpenStackControlPlane custom resource (CR) to perform the following tasks: Create the control plane. Enable the Red Hat OpenStack Services on OpenShift (RHOSO) services. The following procedure creates an initial control plane with the recommended configurations for each service. The procedure helps you quickly create an operating control plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add service customizations to a deployed environment. For more information about how to customize your control plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. For an example OpenStackControlPlane CR, see Example OpenStackControlPlane CR . Tip Use the following commands to view the OpenStackControlPlane CRD definition and specification schema: Procedure Create a file on your workstation named openstack_control_plane.yaml to define the OpenStackControlPlane CR: Specify the Secret CR you created to provide secure access to the RHOSO service pods in Providing secure access to the Red Hat OpenStack Services on OpenShift services : Specify the storageClass you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end: Replace <RHOCP_storage_class> with the storage class you created for your RHOCP cluster storage back end. For information about storage classes, see Creating a storage class . Add the following service configurations: Note The following service snippets use IP addresses from the default RHOSO MetalLB IPAddressPool range for the loadBalancerIPs field. Update the loadBalancerIPs field with the IP address from the MetalLB IPAddressPool range you created in step 12 of the Preparing RHOCP for RHOSO networks procedure. Block Storage service (cinder): 1 You can deploy the initial control plane without activating the cinderBackup service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. 
For information about the recommended replicas for each service and how to configure a back end for the Block Storage service and the backup service, see Configuring the Block Storage backup service in Configuring persistent storage . 2 You can deploy the initial control plane without activating the cinderVolumes service. To deploy the service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the cinderVolumes service and how to configure a back end for the service, see Configuring the volume service in Configuring persistent storage . Compute service (nova): Note A full set of Compute services (nova) are deployed by default for each of the default cells, cell0 and cell1 : nova-api , nova-metadata , nova-scheduler , and nova-conductor . The novncproxy service is also enabled for cell1 by default. DNS service for the data plane: 1 Defines the dnsmasq instances required for each DNS server by using key-value pairs. In this example, there are two key-value pairs defined because there are two DNS servers configured to forward requests to. 2 Specifies the dnsmasq parameter to customize for the deployed dnsmasq instance. Set to one of the following valid values: server rev-server srv-host txt-record ptr-record rebind-domain-ok naptr-record cname host-record caa-record dns-rr auth-zone synth-domain no-negcache local 3 Specifies the values for the dnsmasq parameter. You can specify a generic DNS server as the value, for example, 1.1.1.1 , or a DNS server for a specific domain, for example, /google.com/8.8.8.8 . Identity service (keystone) Image service (glance): 1 You can deploy the initial control plane without activating the Image service (glance). To deploy the Image service, you must set the number of replicas for the service and configure the back end for the service. For information about the recommended replicas for the Image service and how to configure a back end for the service, see Configuring the Image service (glance) in Configuring persistent storage . If you do not deploy the Image service, you cannot upload images to the cloud or start an instance. Key Management service (barbican): Networking service (neutron): Object Storage service (swift): OVN: Placement service (placement): Telemetry service (ceilometer, prometheus): 1 You must have the autoscaling field present, even if autoscaling is disabled. Add the following service configurations to implement high availability (HA): A MariaDB Galera cluster for use by all RHOSO services ( openstack ), and a MariaDB Galera cluster for use by the Compute service for cell1 ( openstack-cell1 ): A single memcached cluster that contains three memcached servers: A RabbitMQ cluster for use by all RHOSO services ( rabbitmq ), and a RabbitMQ cluster for use by the Compute service for cell1 ( rabbitmq-cell1 ): Note Multiple RabbitMQ instances cannot share the same VIP as they use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses. Create the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 
Note Creating the control plane also creates an OpenStackClient pod that you can access through a remote shell ( rsh ) to run OpenStack CLI commands. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace: The control plane is deployed when all the pods are either completed or running. Verification Open a remote shell connection to the OpenStackClient pod: Confirm that the internal service endpoints are registered with each service: Exit the OpenStackClient pod: 4.3. Example OpenStackControlPlane CR The following example OpenStackControlPlane CR is a complete control plane configuration that includes all the key services that must always be enabled for a successful deployment. 1 The storage class that you created for your Red Hat OpenShift Container Platform (RHOCP) cluster storage back end. 2 Service-specific parameters for the Block Storage service (cinder). 3 The Block Storage service back end. For more information on configuring storage services, see the Configuring persistent storage guide. 4 The Block Storage service configuration. For more information on configuring storage services, see the Configuring persistent storage guide. 5 The list of networks that each service pod is directly attached to, specified by using the NetworkAttachmentDefinition resource names. A NIC is configured for the service for each specified network attachment. Note If you do not configure the isolated networks that each service pod is attached to, then the default pod network is used. For example, the Block Storage service uses the storage network to connect to a storage back end; the Identity service (keystone) uses an LDAP or Active Directory (AD) network; the ovnDBCluster service uses the internalapi network; and the ovnController service uses the tenant network. 6 Service-specific parameters for the Compute service (nova). 7 Service API route definition. You can customize the service route by using route-specific annotations. For more information, see Route-specific annotations in the RHOCP Networking guide. Set route: to {} to apply the default route template. 8 The internal service API endpoint registered as a MetalLB service with the IPAddressPool internalapi . 9 The virtual IP (VIP) address for the service. The IP is shared with other services by default. 10 The RabbitMQ instances exposed to an isolated network with distinct IP addresses defined in the loadBalancerIPs annotation, as indicated in 11 and 12 . Note Multiple RabbitMQ instances cannot share the same VIP as they use the same port. If you need to expose multiple RabbitMQ instances to the same network, then you must use distinct IP addresses. 11 The distinct IP address for a RabbitMQ instance that is exposed to an isolated network. 12 The distinct IP address for a RabbitMQ instance that is exposed to an isolated network. 4.4. Removing a service from the control plane You can completely remove a service and the service database from the control plane after deployment by disabling the service. Many services are enabled by default, which means that the OpenStack Operator creates resources such as the service database and Identity service (keystone) users, even if no service pod is created because replicas is set to 0 . Warning Remove a service with caution. Removing a service is not the same as stopping service pods. Removing a service is irreversible. Disabling a service removes the service database and any resources that referenced the service are no longer tracked. 
Create a backup of the service database before removing a service. Procedure Open the OpenStackControlPlane CR file on your workstation. Locate the service you want to remove from the control plane and disable it: Update the control plane: Wait until RHOCP removes the resource related to the disabled service. Run the following command to check the status: The OpenStackControlPlane resource is updated with the disabled service when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Optional: Confirm that the pods from the disabled service are no longer listed by reviewing the pods in the openstack namespace: Check that the service is removed: This command returns the following message when the service is successfully removed: Check that the API endpoints for the service are removed from the Identity service (keystone): This command returns the following message when the API endpoints for the service are successfully removed: 4.5. Additional resources Kubernetes NMState Operator The Kubernetes NMState project Load balancing with MetalLB MetalLB documentation MetalLB in layer 2 mode Specify network interfaces that LB IP can be announced from Multiple networks Using the Multus CNI in OpenShift macvlan plugin whereabouts IPAM CNI plugin - Extended configuration About advertising for the IP address pools Dynamic provisioning Configuring the Block Storage backup service in Configuring persistent storage . Configuring the Image service (glance) in Configuring persistent storage .
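To complement the verification steps in section 4.2 and the service removal in section 4.4, the following is a minimal command sketch. The condition name Ready and the 30-minute timeout are assumptions, so confirm the condition names your operator version reports before relying on them.

# Block until the operator reports the control plane as ready. Condition name and
# timeout are assumptions; check them with
# "oc describe openstackcontrolplane openstack-control-plane -n openstack".
oc wait openstackcontrolplane/openstack-control-plane -n openstack \
  --for=condition=Ready --timeout=30m

# Disable the Block Storage service (cinder) in place, equivalent to setting
# "enabled: false" in the CR file as shown in section 4.4.
oc patch openstackcontrolplane/openstack-control-plane -n openstack \
  --type=merge -p '{"spec":{"cinder":{"enabled":false}}}'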
[ "oc get networkpolicy -n openstack", "oc describe crd openstackcontrolplane oc explain openstackcontrolplane.spec", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: secret: osp-secret", "spec: secret: osp-secret storageClass: <RHOCP_storage_class>", "cinder: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 0 1 cinderVolumes: volume1: networkAttachments: - storage replicas: 0 2", "nova: apiOverride: route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack cellMessageBusInstance: rabbitmq hasAPIAccess: true cell1: cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane hasAPIAccess: true secret: osp-secret", "dns: template: options: 1 - key: server 2 values: 3 - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2", "keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3", "glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # Configure back end; set to 3 when deploying service 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage", "barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1", "neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: 
annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi", "swift: enabled: true proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 2 secret: osp-secret swiftRing: ringReplicas: 3 swiftStorage: networkAttachments: - storage replicas: 3 storageRequest: 10Gi", "ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replcas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {}", "placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret", "telemetry: enabled: true template: metricStorage: enabled: true dashboardsEnabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: 1 enabled: false aodh: databaseAccount: aodh databaseInstance: openstack passwordSelector: aodhService: AodhPassword rabbitMqClusterName: rabbitmq serviceUser: aodh secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80", "galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3", "memcached: templates: memcached: replicas: 3", "rabbitmq: templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer", "oc create -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc rsh -n openstack openstackclient", "oc get pods -n openstack", "oc rsh -n openstack openstackclient", "openstack endpoint list -c 'Service Name' -c Interface -c URL --service glance +--------------+-----------+---------------------------------------------------------------+ | Service Name | Interface | URL | +--------------+-----------+---------------------------------------------------------------+ | glance | internal | https://glance-internal.openstack.svc | | glance | public | https://glance-default-public-openstack.apps.ostest.test.metalkube.org | +--------------+-----------+---------------------------------------------------------------+", "exit", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: secret: osp-secret storageClass: your-RHOCP-storage-class 1 cinder: 2 apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret cinderAPI: replicas: 3 
override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderScheduler: replicas: 1 cinderBackup: 3 networkAttachments: - storage replicas: 0 # backend needs to be configured to activate the service cinderVolumes: 4 volume1: networkAttachments: 5 - storage replicas: 0 # backend needs to be configured to activate the service nova: 6 apiOverride: 7 route: {} template: apiServiceTemplate: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi 8 metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 9 spec: type: LoadBalancer metadataServiceTemplate: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer schedulerServiceTemplate: replicas: 3 cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack cellMessageBusInstance: rabbitmq hasAPIAccess: true cell1: cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 noVNCProxyServiceTemplate: enabled: true networkAttachments: - ctlplane hasAPIAccess: true secret: osp-secret dns: template: options: - key: server values: - 192.168.122.1 - key: server values: - 192.168.122.2 override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 2 galera: templates: openstack: storageRequest: 5000M secret: osp-secret replicas: 3 openstack-cell1: storageRequest: 5000M secret: osp-secret replicas: 3 keystone: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret replicas: 3 glance: apiOverrides: default: route: {} template: databaseInstance: openstack storage: storageRequest: 10G secret: osp-secret keystoneEndpoint: default glanceAPIs: default: replicas: 0 # Configure back end; set to 3 when deploying service override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage barbican: apiOverride: route: {} template: databaseInstance: openstack secret: osp-secret barbicanAPI: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 3 barbicanKeystoneListener: replicas: 1 memcached: templates: memcached: replicas: 3 neutron: apiOverride: route: {} template: replicas: 3 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret networkAttachments: - internalapi swift: enabled: true 
proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 swiftRing: ringReplicas: 1 swiftStorage: networkAttachments: - storage replicas: 1 storageRequest: 10Gi ovn: template: ovnDBCluster: ovndbcluster-nb: replicas: 3 dbType: NB storageRequest: 10G networkAttachment: internalapi ovndbcluster-sb: replicas: 3 dbType: SB storageRequest: 10G networkAttachment: internalapi ovnNorthd: {} placement: apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack replicas: 3 secret: osp-secret rabbitmq: 10 templates: rabbitmq: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 11 spec: type: LoadBalancer rabbitmq-cell1: replicas: 3 override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 12 spec: type: LoadBalancer telemetry: enabled: true template: metricStorage: enabled: true dashboardsEnabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G autoscaling: enabled: false aodh: databaseAccount: aodh databaseInstance: openstack passwordSelector: aodhService: AodhPassword rabbitMqClusterName: rabbitmq serviceUser: aodh secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80", "cinder: enabled: false apiOverride: route: {}", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started", "oc get pods -n openstack", "oc get cinder -n openstack", "No resources found in openstack namespace.", "oc rsh -n openstack openstackclient openstack endpoint list --service volumev3", "No service with a type, name or ID of 'volumev3' exists." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_red_hat_openstack_services_on_openshift/assembly_creating-the-control-plane
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on any platform including virtualized and cloud environments where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on any platform . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.5. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. 
To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 2.5.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration . The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason they must be created before the Storage Cluster. Note Network attachment definitions can only use the whereabouts IP address management (IPAM), and it must specify the range field. ipRanges and plugin chaining are not supported. You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). 2.6. Creating OpenShift Data Foundation cluster on any platform Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. If you want to use multus networking, you must create network attachment definitions (NADs) before deployment which is later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . 
In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed if at least 24 CPUs and 72 GiB of RAM is available. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . 
Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System Click ocs-storagecluster-storagesystem Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark to it. To verify if the flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System Click ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. 
If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide and follow the instructions in the "Scaling storage of bare metal OpenShift Data Foundation cluster" section. 2.7. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.7.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Table 2.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.7.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.7.3. 
Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.7.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: 2.7.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally, the cluster will be self-configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network and cluster network for the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command-line interface, run the following commands: Sample output: Confirm that the OSD pods are using the correct network In the openshift-storage namespace, use one of the OSD pods to verify that the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm that the OSD pods are using the correct network using the command-line interface, run the following command (requires the jq utility): Sample output:
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'", "spec: flexibleScaling: true [...] 
status: failureDomain: host", "[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]", "oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'", "{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}", "oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'", "[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]", "oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'", "openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_any_platform/deploy-using-local-storage-devices-bm
2.6. Software Collection Scriptlets
2.6. Software Collection Scriptlets The Software Collection scriptlets are simple shell scripts that change the current system environment so that the group of packages in the Software Collection is preferred over the corresponding group of conventional packages installed on the system. To utilize the Software Collection scriptlets, use the scl tool that is part of the scl-utils package. For more information on scl , refer to Section 1.6, "Enabling a Software Collection" . A single Software Collection can include multiple Software Collection scriptlets. These scriptlets are located in the /opt/provider/software_collection/ directory in your Software Collection package. If you only need to distribute a single scriptlet in your Software Collection, it is highly recommended that you use enable as the name for that scriptlet. When the user runs a command in the Software Collection environment by executing scl enable software_collection command , the /opt/provider/software_collection/enable scriptlet is then used to update search paths, and so on. Note that Software Collection scriptlets can only set the system environment in a subshell that is created by running the scl enable command. The subshell is only active for the time the command is being performed.
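To illustrate the enable scriptlet described above, the following is a minimal sketch of an /opt/provider/software_collection/enable file. The provider name, collection name, and the directories it prepends are placeholders and must match the actual layout of your Software Collection.

# /opt/provider/software_collection/enable -- sourced by "scl enable software_collection <command>"
# Prepend the collection's directories to the relevant search paths for the subshell only.
export PATH=/opt/provider/software_collection/root/usr/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/opt/provider/software_collection/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export MANPATH=/opt/provider/software_collection/root/usr/share/man:${MANPATH}

Running scl enable software_collection 'bash' then opens a subshell in which these paths take precedence until the command exits, after which the original environment is restored.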
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-software_collection_scriptlets
Chapter 1. Overview
Chapter 1. Overview AMQ .NET is a lightweight AMQP 1.0 library for the .NET platform. It enables you to write .NET applications that send and receive AMQP messages. AMQ .NET is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.8 Release Notes . AMQ .NET is based on AMQP.Net Lite . For detailed API documentation, see the AMQ .NET API reference . 1.1. Key features SSL/TLS for secure communication Flexible SASL authentication Seamless conversion between AMQP and native data types Access to all the features and capabilities of AMQP 1.0 An integrated development environment with full IntelliSense API documentation 1.2. Supported standards and protocols AMQ .NET supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms ANONYMOUS, PLAIN, and EXTERNAL Modern TCP with IPv6 1.3. Supported configurations AMQ .NET supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with .NET Core 3.1 Microsoft Windows 10 Pro with .NET Core 3.1 or .NET Framework 4.7 Microsoft Windows Server 2012 R2 and 2016 with .NET Core 3.1 or .NET Framework 4.7 AMQ .NET is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Connection A channel for communication between two peers on a network Session A context for sending and receiving messages Sender link A channel for sending messages to a target Receiver link A channel for receiving messages from a source Source A named point of origin for messages Target A named destination for messages Message A mutable holder of application data AMQ .NET sends and receives messages . Messages are transferred between connected peers over links . Links are established over sessions . Sessions are established over connections . A sending peer creates a sender link to send messages. The sender link has a target that identifies a queue or topic at the remote peer. A receiving client creates a receiver link to receive messages. The receiver link has a source that identifies a queue or topic at the remote peer. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_.net_client/overview
Chapter 2. Disaster recovery subscription requirement
Chapter 2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/disaster-recovery-subscriptions_common
Chapter 98. KafkaTopic schema reference
Chapter 98. KafkaTopic schema reference Property Property type Description spec KafkaTopicSpec The specification of the topic. status KafkaTopicStatus The status of the topic.
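The schema reference above lists only the spec and status properties. The following is a minimal sketch of a KafkaTopic resource applied with oc; the API version, the strimzi.io/cluster label, and the spec fields shown (partitions, replicas, config) are assumptions based on common Streams for Apache Kafka examples rather than values taken from this reference, so check the installed CRD (for example with oc explain kafkatopic.spec) before relying on them.

# Minimal KafkaTopic sketch (field names assumed; verify against your installed CRD).
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to a Kafka cluster managed by the Topic Operator
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000          # 7 days
EOF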
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaTopic-reference
Chapter 23. Authentication and Interoperability
Chapter 23. Authentication and Interoperability yum no longer reports package conflicts after installing ipa-client After the user installed the ipa-client package, the yum utility unexpectedly reported package conflicts between the ipa and freeipa packages. These errors occurred after failed transactions or after using the yum check command. With this update, yum no longer reports errors about self-conflicting packages because such conflicts are allowed by RPM. As a result, yum no longer displays the described errors after installing ipa-client . (BZ# 1370134 ) In FIPS mode, the slapd_pk11_getInternalKeySlot() function is now used to retrieve the key slot for a token The Red Hat Directory Server previously tried to retrieve the key slot from a fixed token name when FIPS mode was enabled on the security database. However, the token name can change. If the key slot is not found, Directory Server is unable to decode the replication manager's password and replication sessions fail. To fix the problem, the slapd_pk11_getInternalKeySlot() function now uses FIPS mode to retrieve the current key slot. As a result, replication sessions using SSL or STARTTLS no longer fail in the described situation. (BZ# 1378209 ) Certificate System no longer fails to install with a Thales HSM on systems in FIPS mode After installing the Certificate System (CS) with a Thales hardware security module (HSM), the SSL protocol did not work correctly if you generated all system keys on the HSM. Consequently, CS failed to install on systems with FIPS mode enabled, requiring you to manually modify the sslRangeCiphers parameter in the server.xml file. This bug has been fixed, and installation on FIPS-enabled systems with a Thales HSM works as expected. (BZ#1382066) The dependency list for pkispawn now correctly includes openssl Previously, when the openssl package was not installed, using the pkispawn utility failed with the following error: This problem occurred because the openssl package was not included as a runtime dependency of the pki-server package contained within the pki-core package. This bug has been fixed by adding the missing dependency, and pkispawn installations no longer fail due to missing openssl . (BZ#1376488) Error messages from the PKI Server profile framework are now passed through to the client Previously, PKI Server did not pass through certain error messages generated by the profile framework for certificate requests to the client. Consequently, the error messages displayed on the web UI or in the output of the pki command did not describe why a request failed. The code has been fixed and now passes through error messages. Now users can see the reason why an enrollment failed or was rejected. (BZ#1249400) Certificate System does not start a Lightweight CA key replication during installation Previously, Certificate System incorrectly started a Lightweight CA key replication during a two-step installation. As a consequence, the installation failed and an error was displayed. With this update, the two-step installation does not start the Lightweight CA key replication and the installation completes successfully. (BZ# 1378275 ) PKI Server now correctly compares subject DNs during startup Due to a bug in the routine that adds a Lightweight CA entry for the primary CA, PKI Server previously failed to compare subject distinguished names (DNs) if they contained attributes using encodings other than UTF8String .
As a consequence, every time the primary CA started, an additional Lightweight CA entry was added. PKI Server now compares the subject DNs in canonical form. As a result, PKI Server no longer adds additional Lightweight CA entries in the mentioned scenario. (BZ# 1378277 ) KRA installation no longer fails when connecting to an intermediate CA with an incomplete certificate chain Previously, installing a Key Recovery Authority (KRA) subsystem failed with an UNKNOWN_ISSUER error if the KRA attempted to connect to an intermediate CA that had a trusted CA certificate but did not have the root CA certificate. With this update, KRA installation ignores the error and completes successfully. (BZ# 1381084 ) The startTime field in certificate profiles now uses long integer format Previously, Certificate System stored the value in the startTime field of a certificate profile as integer . If you entered a larger number, Certificate System interpreted the value as a negative number. Consequently, the certificate authority issued certificates that contained a start date located in the past. With this update, the input format of the startTime field has been changed to a long integer. As a result, the issued certificates now have a correct start date. (BZ#1385208) Subordinate CA installation no longer fails with a PKCS#11 token is not logged in error Previously, subordinate Certificate Authority (sub-CA) installation failed due to a bug in the Network Security Services (NSS) library, which generated the SEC_ERROR_TOKEN_NOT_LOGGED_IN error. This update adds a workaround to the installer which allows the installation to proceed. If the error is still displayed, it can now be ignored. (BZ# 1395817 ) The pkispawn script now correctly sets the ECC key sizes Previously, when a user ran the pkispawn script with an Elliptic Curve Cryptography (ECC) key size parameter set to a different value than the default, which is nistp256 , the setting was ignored. Consequently, the created PKI Server instance issued system certificates, which incorrectly used the default ECC key curve. With this update, PKI Server uses the value set in the pkispawn configuration for the ECC key curve name. As a result, the PKI Server instance now uses the ECC key size set when setting up the instance. (BZ#1397200) CA clone installation in FIPS mode no longer fails Previously, installing a CA clone or a Key Recovery Authority (KRA) failed in FIPS mode due to an inconsistency in handling internal NSS token names. With this update, the code that handles the token name has been consolidated to ensure that all token names are handled consistently. This allows the KRA and CA clone installation to complete properly in FIPS mode. (BZ# 1411428 ) PKI Server no longer fails to start when an entryUSN attribute contains a value larger than 32-bit Previously, the LDAP Profile Monitor and the Lightweight CA Monitor parsed values in entryUSN attributes as a 32-bit integer. As a consequence, when the attribute contained a value larger than that, a NumberFormatException error was logged and the server failed to start. The problem has been fixed, and the server no longer fails to start in the mentioned scenario. (BZ# 1412681 ) Tomcat now works with IPv6 by default The IPv4 -specific 127.0.0.1 loopback address was previously used in the default server configuration file as the default AJP host name. This caused connections to fail on servers which run in IPv6 -only environments.
With this update, the default value is changed to localhost , which works with both IPv4 and IPv6 protocols. Additionally, an upgrade script is available to automatically change the AJP host name on existing server instances. (BZ# 1413136 ) pkispawn no longer generates invalid NSS database passwords Prior to this update, pkispawn generated a random password for the NSS database which in some cases contained a backslash ( \ ) character. This caused problems when NSS established SSL connections, which in turn caused the installation to fail with a ACCESS_SESSION_ESTABLISH_FAILURE error. This update ensures that the randomly generated password can not contain the backslash character and a connection can always be established, allowing the installation to finish successfully. (BZ# 1447762 ) Certificate retrieval no longer fails when adding a user certificate with the --serial option Using the pki user-cert-add command with the --serial parameter previously used an improperly set up SSL connection to the certificate authority (CA), causing certificate retrieval to fail. With this update, the command uses a properly configured SSL connection to the CA, and the operation now completes successfully. (BZ#1246635) CA web interface no longer shows a blank certificate request page if there is only one entry Previously, when the certificate request page in the CA web user interface only contained one entry, it displayed an empty page instead of showing the single entry. This update fixes the web user interface, and the certificate request page now correctly shows entries in all circumstances. (BZ# 1372052 ) Installing PKI Server in a container environment no longer displays a warning Previously, when installing the pki-server RPM package in a container environment, the systemd daemon was reloaded. As a consequence, a warning was displayed. A patch has been applied to reload the daemon only during an RPM upgrade. As a result, the warning is no longer displayed in the mentioned scenario. (BZ# 1282504 ) Re-enrolling a token using a G&D smart card no longer fails Previously, when re-enrolling a token using a Giesecke & Devrient (G&D) smart card, the enrollment of the token could fail in certain situations. The problem has been fixed, and as a result, re-enrolling a token works as expected. (BZ#1404881) PKI Server provides more detailed information about certificate validation errors on startup Previously, PKI Server did not provide sufficient information if a certificate validation error occurred when the server was started. Consequently, troubleshooting the problem was difficult. PKI Server now uses the new Java security services (JSS) API which provides more detailed information about the cause of the error in the mentioned scenario. (BZ# 1330800 ) PKI Server no longer fails to re-initialize the LDAPProfileSubsystem profile Due to a race condition during re-initializing the LDAPProfileSubsystem profile, PKI Server previously could incorrectly reported that the requested profile does not exist. Consequently, requests to use the profile could fail. The problem has been fixed, and requests to use the profile no longer fail. (BZ# 1376226 ) Extracting private keys generated on an HSM no longer fails Previously, when generating asymmetric keys on a Lunasa or Thales hardware security module (HSM) using the new Asymmetric Key Generation REST service on the key recovery agent (KRA), PKI Server set incorrect flags. As a consequence, users were unable to retrieve the generated private keys. 
The code has been updated to set the correct flags for keys generated on these HSMs. As a result, users can now retrieve private keys in the mentioned scenario. (BZ# 1386303 ) pkispawn no longer generates passwords consisting only of digits Previously, pkispawn could generate a random password for NSS database consisting only digits. Such passwords are not FIPS-compliant. With this update, the installer has been modified to generate FIPS-compliant random passwords which consist of a mix of digits, lowercase letters, uppercase letters, and certain punctuation marks. (BZ# 1400149 ) CA certificates are now imported with correct trust flags Previously, the pki client-cert-import command imported CA certificates with CT,c, trust flags, which was insufficient and inconsistent with other PKI tools. With this update, the command has been fixed and now sets the trust flags for CA certificates to CT,C,C . (BZ# 1458429 ) Generating a symmetric key no longer fails when using the --usage verify option The pki utility checks a list of valid usages for the symmetric key to be generated. Previously, this list was missing the verify usage. As a consequence, using the key-generate --usage verify option returned an error message. The code has been fixed, and now the verify option works as expected. (BZ#1238684) Subsequent PKI installation no longer fails Previously, when installing multiple public key infrastructure (PKI) instances in batch mode, the installation script did not wait until the CA instance was restarted. As a consequence, the installation of subsequent PKI instances could fail. The script has been updated and now waits until the new subsystem is ready to handle requests before it continues. (BZ# 1446364 ) Two-step subordinate CA installation in FIPS mode no longer fails Previously, a bug in subordinate CA installation in FIPS mode caused two-step installations to fail because the installer required the instance to not exist in the second step. This update changes the workflow so that the first step (installation) requires the instance to not exist, and the second step (configuration) requires the instance to exist. Two new options, "--skip-configuration` and --skip-installation , have been added to the pkispawn command to replace the pki_skip_configuration and pki_skip_installation deployment parameters. This allows you to use the same deployment configuration file for both steps without modifications. (BZ#1454450) The audit log no longer records success when a certificate request was rejected or canceled Previously when a certificate request was rejected or canceled, the server generated a CERT_REQUEST_PROCESSED audit log entry with Outcome=Success . This was incorrect because there was no certificate issued for the request. This bug has been fixed, and the CERT_REQUEST_PROCESSED audit log entry for a rejected or canceled request now reads Outcome=Failure . (BZ# 1452250 ) PKI subsystems which failed self tests are now automatically re-enabled on startup Previously, if a PKI subsystem failed to start due to self test failure, it was automatically disabled to prevent it from running in an inconsistent state. The administrator was expected to re-enable the subsystem manually using pki-server subsystem-enable after fixing the problem. However, this was not clearly communicated, potentially causing confusion among administrators who were not always aware of this requirement. To alleviate this problem, all PKI subsystems are now re-enabled automatically on startup by default. 
If a self-test fails, the subsystem is disabled as before, but it will no longer require manual re-enabling. This behavior is controlled by a new boolean option in the /etc/pki/pki.conf file, PKI_SERVER_AUTO_ENABLE_SUBSYSTEMS . (BZ# 1454471 ) CERT_REQUEST_PROCESSED audit log entries now include certificate serial number instead of encoded data Previously, CERT_REQUEST_PROCESSED audit log entries included Base64-encoded certificate data. For example: This information was not very useful because the certificate data would have to be decoded separately. The code has been changed to include the certificate serial number directly into the log entry, as shown in the following example: (BZ# 1452344 ) Updating the LDAPProfileSubsystem profile now supports removing attributes Previously, when updating the LDAPProfileSubsystem profile on PKI Server, attributes could not be removed. As a result, PKI Server was unable to load the profile or issue certificates after updating the profile in certain situations. A patch has been applied, and now PKI Server clears the existing profile configuration before loading the new configuration. As a result, updates in the LDAPProfileSubsystem profile can now remove configuration attributes. (BZ# 1445088 )
[ "Installation failed: [Errno 2] No such file or directory", "[AuditEvent=CERT_REQUEST_PROCESSED]...[InfoName=certificate][InfoValue=MIIDBD...]", "[AuditEvent=CERT_REQUEST_PROCESSED]...[CertSerialNum=7]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_authentication_and_interoperability
Chapter 10. Subscriptions
Chapter 10. Subscriptions For information about keeping your automation controller subscription in compliance, see Troubleshooting: Keep your subscription in compliance in the Automation Controller User Guide.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/troubleshoot-subscriptions
Chapter 4. Supported platforms
Chapter 4. Supported platforms This section describes the different server platforms, hardware, tokens, and software supported by Red Hat Certificate System 10. 4.1. General requirements The minimal and recommended hardware for Red Hat Certificate System 10 are as follows: Minimal requirements CPU: 2 threads RAM: 2 GB Disk space: 20 GB The minimal requirements are based on the Red Hat Enterprise Linux 8 minimal requirements. For more information, see Red Hat Enterprise Linux technology capabilities and limits . Recommended requirements CPU: 4 or more threads, AES-NI support RAM: 8 GB or more Disk space: 80 GB or more 4.2. Server support See Chapter 6, Prerequisites for installation for supported system information. 4.3. Supported web browsers The only fully-tested browser is Mozilla Firefox, and to some extent, Chrome. However, in general, newer versions of browsers on major OS platforms are likely to work. 4.4. Supported Hardware Security Modules The following table lists Hardware Security Modules (HSM) supported by Red Hat Certificate System: HSM Firmware Appliance Software Client Software nCipher nShield Connect XC nShield_HSM_Firmware-12.72.1 12.71.0 SecWorld_Lin64-12.71.0 Thales TCT Luna Network HSM T-5000 with Luna-T7 internal card lunafw_update-7.11.1-4 7.11.0-25 LunaClient-7.11.1-5 Note While the Common Criteria evaluation tested using this Entrust HSM, any HSM is considered equivalent when it is at least FIPS 140-2 validated, provides PKCS#11 3.0 cryptographic services or higher, hardware protection for keys and supports the required algorithms. Some tokens that do not follow the PKCS #11 3.0 semantics will fail. For instance, some tokens do not properly support CKA_ID, which is a requirement for RHCS certificate and key provisioning of the token. NOTE Limited support for Thales Luna: Red Hat was not able to confirm that the Thales HSM unit supports AES key wrapping/unwrapping via OAEP. Please be aware that those features requiring support of this algorithm will not function without such support. These features include: KRA: key archival and recovery CMC SharedToken authentication mechanism for enrollments TKS TPS shared secret automatic transport during installation It is, however, observed that workarounds may be employed for some of these features, but at the cost of degraded security level or operational inconvenience. Another example is that a certain Safenet Luna model supports PKI private key extraction in its CKE - Key Export model, and only in non-FIPS mode. The Luna Cloning model and the CKE model in FIPS mode do not support PKI private key extraction.
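As a quick way to compare a host against the hardware figures listed above, the following shell sketch uses standard RHEL utilities; the thresholds in the comments come from this section (minimal: 2 threads, 2 GB RAM, 20 GB disk; recommended: 4 or more threads with AES-NI, 8 GB RAM, 80 GB disk).

# Report the values to compare against the Certificate System hardware guidance above.
echo "CPU threads: $(nproc)"
echo "RAM (MiB):   $(free -m | awk '/^Mem:/ {print $2}')"
df -h / | awk 'NR==2 {print "Disk on /:  " $2 " total, " $4 " available"}'
# AES-NI support (recommended):
grep -q -m1 aes /proc/cpuinfo && echo "AES-NI: yes" || echo "AES-NI: no"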
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/supported_platforms
function::json_add_array_numeric_metric
function::json_add_array_numeric_metric Name function::json_add_array_numeric_metric - Add a numeric metric to an array Synopsis Arguments array_name The name of the array the numeric metric should be added to. metric_name The name of the numeric metric. metric_description Metric description. An empty string can be used. metric_units Metric units. An empty string can be used. Description This function adds a numeric metric to an array, setting up everything needed.

[ "json_add_array_numeric_metric:long(array_name:string,metric_name:string,metric_description:string,metric_units:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-json-add-array-numeric-metric
Chapter 2. Accessing hosts
Chapter 2. Accessing hosts Learn how to create a bastion host to access OpenShift Container Platform instances and access the control plane nodes with secure shell (SSH) access. 2.1. Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster The OpenShift Container Platform installer does not create any public IP addresses for any of the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for your OpenShift Container Platform cluster. To be able to SSH to your OpenShift Container Platform hosts, you must follow this procedure. Procedure Create a security group that allows SSH access into the virtual private cloud (VPC) created by the openshift-install command. Create an Amazon EC2 instance on one of the public subnets the installer created. Associate a public IP address with the Amazon EC2 instance that you created. Unlike with the OpenShift Container Platform installation, you should associate the Amazon EC2 instance you created with an SSH keypair. It does not matter what operating system you choose for this instance, as it will simply serve as an SSH bastion to bridge the internet into your OpenShift Container Platform cluster's VPC. The Amazon Machine Image (AMI) you use does matter. With Red Hat Enterprise Linux CoreOS (RHCOS), for example, you can provide keys via Ignition, like the installer does. After you provisioned your Amazon EC2 instance and can SSH into it, you must add the SSH key that you associated with your OpenShift Container Platform installation. This key can be different from the key for the bastion instance, but does not have to be. Note Direct SSH access is only recommended for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead. Run oc get nodes , inspect the output, and choose one of the nodes that is a master. The hostname looks similar to ip-10-0-1-163.ec2.internal . From the bastion SSH host you manually deployed into Amazon EC2, SSH into that control plane host. Ensure that you use the same SSH key you specified during the installation: USD ssh -i <ssh-key-path> core@<master-hostname>
[ "ssh -i <ssh-key-path> core@<master-hostname>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/accessing-hosts
1.5. Pacemaker Configuration and Management Tools
1.5. Pacemaker Configuration and Management Tools Pacemaker features two configuration tools for cluster deployment, monitoring, and management. pcs pcs can control all aspects of Pacemaker and the Corosync heartbeat daemon. A command-line based program, pcs can perform the following cluster management tasks: Create and configure a Pacemaker/Corosync cluster Modify configuration of the cluster while it is running Remotely configure both Pacemaker and Corosync, as well as start, stop, and display status information of the cluster pcsd Web UI A graphical user interface to create and configure Pacemaker/Corosync clusters, with the same features and abilities as the command-line based pcs utility.
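To illustrate the pcs tasks listed above, the following is a minimal sketch of creating and inspecting a two-node cluster with the Red Hat Enterprise Linux 7 pcs syntax. The node names are placeholders, the auth step prompts for the hacluster password, and fencing still has to be configured before the cluster is suitable for production.

# Authenticate pcsd on the nodes, create and start the cluster, then check its status.
pcs cluster auth node1.example.com node2.example.com -u hacluster
pcs cluster setup --name my_cluster node1.example.com node2.example.com
pcs cluster start --all
pcs status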
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/s1-Pacemakertools-HAAO
Chapter 12. Performance and reliability tuning
Chapter 12. Performance and reliability tuning 12.1. Flow control mechanisms If logs are produced faster than they can be collected, it can be difficult to predict or control the volume of logs being sent to an output. Not being able to predict or control the volume of logs being sent to an output can result in logs being lost. If there is a system outage and log buffers are accumulated without user control, this can also cause long recovery times and high latency when the connection is restored. As an administrator, you can limit logging rates by configuring flow control mechanisms for your logging. 12.1.1. Benefits of flow control mechanisms The cost and volume of logging can be predicted more accurately in advance. Noisy containers cannot produce unbounded log traffic that drowns out other containers. Ignoring low-value logs reduces the load on the logging infrastructure. High-value logs can be preferred over low-value logs by assigning higher rate limits. 12.1.2. Configuring rate limits Rate limits are configured per collector, which means that the maximum rate of log collection is the number of collector instances multiplied by the rate limit. Because logs are collected from each node's file system, a collector is deployed on each cluster node. For example, in a 3-node cluster, with a maximum rate limit of 10 records per second per collector, the maximum rate of log collection is 30 records per second. Because the exact byte size of a record as written to an output can vary due to transformations, different encodings, or other factors, rate limits are set in number of records instead of bytes. You can configure rate limits in the ClusterLogForwarder custom resource (CR) in two ways: Output rate limit Limit the rate of outbound logs to selected outputs, for example, to match the network or storage capacity of an output. The output rate limit controls the aggregated per-output rate. Input rate limit Limit the per-container rate of log collection for selected containers. 12.1.3. Configuring log forwarder output rate limits You can limit the rate of outbound logs to a specified output by configuring the ClusterLogForwarder custom resource (CR). Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. Procedure Add a maxRecordsPerSecond limit value to the ClusterLogForwarder CR for a specified output. The following example shows how to configure a per collector output rate limit for a Kafka broker output named kafka-example : Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3 # ... 1 The output name. 2 The type of output. 3 The log output rate limit. This value sets the maximum Quantity of logs that can be sent to the Kafka broker per second. This value is not set by default. The default behavior is best effort, and records are dropped if the log forwarder cannot keep up. If this value is 0 , no logs are forwarded. Apply the ClusterLogForwarder CR: Example command USD oc apply -f <filename>.yaml Additional resources Log output types 12.1.4. Configuring log forwarder input rate limits You can limit the rate of incoming logs that are collected by configuring the ClusterLogForwarder custom resource (CR). You can set input limits on a per-container or per-namespace basis. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. 
Procedure Add a maxRecordsPerSecond limit value to the ClusterLogForwarder CR for a specified input. The following examples show how to configure input rate limits for different scenarios: Example ClusterLogForwarder CR that sets a per-container limit for containers with certain labels apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3 # ... 1 The input name. 2 A list of labels. If these labels match labels that are applied to a pod, the per-container limit specified in the maxRecordsPerSecond field is applied to those containers. 3 Configures the rate limit. Setting the maxRecordsPerSecond field to 0 means that no logs are collected for the container. Setting the maxRecordsPerSecond field to some other value means that a maximum of that number of records per second are collected for the container. Example ClusterLogForwarder CR that sets a per-container limit for containers in selected namespaces apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000 # ... 1 The input name. 2 A list of namespaces. The per-container limit specified in the maxRecordsPerSecond field is applied to all containers in the namespaces listed. 3 Configures the rate limit. Setting the maxRecordsPerSecond field to 10 means that a maximum of 10 records per second are collected for each container in the namespaces listed. Apply the ClusterLogForwarder CR: Example command USD oc apply -f <filename>.yaml
[ "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000", "oc apply -f <filename>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/performance-and-reliability-tuning
Client Configuration Guide for Red Hat Insights with FedRAMP
Client Configuration Guide for Red Hat Insights with FedRAMP Red Hat Insights 1-latest Configuration options and use cases for the Insights client Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights_with_fedramp/index
Chapter 99. Exec Component
Chapter 99. Exec Component Available as of Camel version 2.3 The exec component can be used to execute system commands. 99.1. Dependencies Maven users need to add the following dependency to their pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-exec</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel (2.3.0 or higher). 99.2. URI format exec://executable[?options] where executable is the name, or file path, of the system command that will be executed. If an executable name is used (e.g. exec:java ), the executable must be in the system path. 99.3. URI options The Exec component has no options. The Exec endpoint is configured using URI syntax: exec:executable with the following path and query parameters: 99.3.1. Path Parameters (1 parameter): Name Description Default Type executable Required Sets the executable to be executed. The executable must not be empty or null. String 99.3.2. Query Parameters (8 parameters): Name Description Default Type args (producer) The arguments may be one or many whitespace-separated tokens. String binding (producer) A reference to a org.apache.camel.component.exec.ExecBinding in the Registry. ExecBinding commandExecutor (producer) A reference to a org.apache.camel.component.exec.ExecCommandExecutor in the Registry that customizes the command execution. The default command executor utilizes the commons-exec library, which adds a shutdown hook for every executed command. ExecCommandExecutor outFile (producer) The name of a file, created by the executable, that should be considered as its output. If no outFile is set, the standard output (stdout) of the executable will be used instead. String timeout (producer) The timeout, in milliseconds, after which the executable should be terminated. If execution has not completed within the timeout, the component will send a termination request. long useStderrOnEmptyStdout (producer) A boolean indicating that when stdout is empty, this component will populate the Camel Message Body with stderr. This behavior is disabled (false) by default. false boolean workingDir (producer) The directory in which the command should be executed. If null, the working directory of the current process will be used. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 99.4. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.exec.enabled Enable exec component true Boolean camel.component.exec.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 99.5. Message headers The supported headers are defined in org.apache.camel.component.exec.ExecBinding . Name Type Message Description ExecBinding.EXEC_COMMAND_EXECUTABLE String in The name of the system command that will be executed. Overrides executable in the URI. ExecBinding.EXEC_COMMAND_ARGS java.util.List<String> in Command-line arguments to pass to the executed process. The arguments are used literally - no quoting is applied. Overrides any existing args in the URI. ExecBinding.EXEC_COMMAND_ARGS String in Camel 2.5: The arguments of the executable as a single string where each argument is whitespace separated (see args in URI option). 
The arguments are used literally, no quoting is applied. Overrides any existing args in the URI. ExecBinding.EXEC_COMMAND_OUT_FILE String in The name of a file, created by the executable, that should be considered as its output. Overrides any existing outFile in the URI. ExecBinding.EXEC_COMMAND_TIMEOUT long in The timeout, in milliseconds, after which the executable should be terminated. Overrides any existing timeout in the URI. ExecBinding.EXEC_COMMAND_WORKING_DIR String in The directory in which the command should be executed. Overrides any existing workingDir in the URI. ExecBinding.EXEC_EXIT_VALUE int out The value of this header is the exit value of the executable. Non-zero exit values typically indicate abnormal termination. Note that the exit value is OS-dependent. ExecBinding.EXEC_STDERR java.io.InputStream out The value of this header points to the standard error stream (stderr) of the executable. If no stderr is written, the value is null . ExecBinding.EXEC_USE_STDERR_ON_EMPTY_STDOUT boolean in Indicates that when stdout is empty, this component will populate the Camel Message Body with stderr . This behavior is disabled ( false ) by default. 99.6. Message body If the Exec component receives an in message body that is convertible to java.io.InputStream , it is used to feed input to the executable via its stdin. After execution, the message body is the result of the execution, that is, an org.apache.camel.component.exec.ExecResult instance containing the stdout, stderr, exit value, and out file. This component supports the following ExecResult type converters for convenience: From To ExecResult java.io.InputStream ExecResult String ExecResult byte [] ExecResult org.w3c.dom.Document If an out file is specified (in the endpoint via outFile or the message headers via ExecBinding.EXEC_COMMAND_OUT_FILE ), converters will return the content of the out file. If no out file is used, then this component will convert the stdout of the process to the target type. For more details, please refer to the usage examples below. 99.7. Usage examples 99.7.1. Executing word count (Linux) The example below executes wc (word count, Linux) to count the words in file /usr/share/dict/words . The word count (output) is written to the standard output stream of wc . from("direct:exec") .to("exec:wc?args=--words /usr/share/dict/words") .process(new Processor() { public void process(Exchange exchange) throws Exception { // By default, the body is ExecResult instance assertIsInstanceOf(ExecResult.class, exchange.getIn().getBody()); // Use the Camel Exec String type converter to convert the ExecResult to String // In this case, the stdout is considered as output String wordCountOutput = exchange.getIn().getBody(String.class); // do something with the word count } }); 99.7.2. Executing java The example below executes java with 2 arguments: -server and -version , provided that java is in the system path. from("direct:exec") .to("exec:java?args=-server -version") The example below executes java in c:\temp with 3 arguments: -server , -version and the system property user.name . from("direct:exec") .to("exec:c:/program files/jdk/bin/java?args=-server -version -Duser.name=Camel&workingDir=c:/temp") 99.7.3. Executing Ant scripts The following example executes Apache Ant (Windows only) with the build file CamelExecBuildFile.xml , provided that ant.bat is in the system path, and that CamelExecBuildFile.xml is in the current directory. 
from("direct:exec") .to("exec:ant.bat?args=-f CamelExecBuildFile.xml") In the next example, the ant.bat command redirects its output to CamelExecOutFile.txt with -l . The file CamelExecOutFile.txt is then used as the out file with outFile=CamelExecOutFile.txt . The example assumes that ant.bat is in the system path, and that CamelExecBuildFile.xml is in the current directory. from("direct:exec") .to("exec:ant.bat?args=-f CamelExecBuildFile.xml -l CamelExecOutFile.txt&outFile=CamelExecOutFile.txt") .process(new Processor() { public void process(Exchange exchange) throws Exception { InputStream outFile = exchange.getIn().getBody(InputStream.class); assertIsInstanceOf(InputStream.class, outFile); // do something with the out file here } }); 99.7.4. Executing echo (Windows) Commands such as echo and dir can be executed only with the command interpreter of the operating system. The following example shows how to execute such a command - echo - in Windows. from("direct:exec").to("exec:cmd?args=/C echo echoString") 99.8. See Also Configuring Camel Component Endpoint Getting Started
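The ExecBinding headers described above can also be used to choose the command, its arguments, and the timeout per message instead of fixing them in the endpoint URI. The following route is a minimal sketch that is not part of the original chapter: the endpoint name direct:execWithHeaders, the placeholder executable in the exec URI, and the ping command are illustrative assumptions; only the ExecBinding constants documented in the headers table are relied on.

from("direct:execWithHeaders")
    // Override the executable, arguments, and timeout configured on the endpoint
    .setHeader(ExecBinding.EXEC_COMMAND_EXECUTABLE, constant("ping"))
    .setHeader(ExecBinding.EXEC_COMMAND_ARGS, constant("-c 1 localhost"))
    .setHeader(ExecBinding.EXEC_COMMAND_TIMEOUT, constant(5000L))
    // The executable in the URI is a placeholder; the headers above take precedence
    .to("exec:placeholder")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            // The exit value is exposed as a header on the resulting message
            Integer exitValue = exchange.getIn().getHeader(ExecBinding.EXEC_EXIT_VALUE, Integer.class);
            // The body converts to the stdout (or the out file, if one was configured)
            String stdout = exchange.getIn().getBody(String.class);
            // handle a non-zero exit value here
        }
    });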
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-exec</artifactId> <version>USD{camel-version}</version> </dependency>", "exec://executable[?options]", "exec:executable", "from(\"direct:exec\") .to(\"exec:wc?args=--words /usr/share/dict/words\") .process(new Processor() { public void process(Exchange exchange) throws Exception { // By default, the body is ExecResult instance assertIsInstanceOf(ExecResult.class, exchange.getIn().getBody()); // Use the Camel Exec String type converter to convert the ExecResult to String // In this case, the stdout is considered as output String wordCountOutput = exchange.getIn().getBody(String.class); // do something with the word count } });", "from(\"direct:exec\") .to(\"exec:java?args=-server -version\")", "from(\"direct:exec\") .to(\"exec:c:/program files/jdk/bin/java?args=-server -version -Duser.name=Camel&workingDir=c:/temp\")", "from(\"direct:exec\") .to(\"exec:ant.bat?args=-f CamelExecBuildFile.xml\")", "from(\"direct:exec\") .to(\"exec:ant.bat?args=-f CamelExecBuildFile.xml -l CamelExecOutFile.txt&outFile=CamelExecOutFile.txt\") .process(new Processor() { public void process(Exchange exchange) throws Exception { InputStream outFile = exchange.getIn().getBody(InputStream.class); assertIsInstanceOf(InputStream.class, outFile); // do something with the out file here } });", "from(\"direct:exec\").to(\"exec:cmd?args=/C echo echoString\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/exec-component
Chapter 21. Direct
Chapter 21. Direct Both producer and consumer are supported The Direct component provides direct, synchronous invocation of any consumers when a producer sends a message exchange. This endpoint can be used to connect existing routes in the same Camel context. Note Asynchronous The SEDA component provides asynchronous invocation of any consumers when a producer sends a message exchange. 21.1. URI format direct:someName[?options] where someName can be any string that uniquely identifies the endpoint. 21.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 21.2.1. Configuring Component Options The component level is the highest level; it holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components have only a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you often need to configure only a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 21.2.2. Configuring Endpoint Options Most configuration is done on endpoints, because endpoints often have many options that let you configure exactly what you need the endpoint to do. The options are also categorized by whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders externalize the configuration from your code, which gives more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 21.3. Component Options The Direct component supports 5 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 21.4. Endpoint Options The Direct endpoint is configured using URI syntax: with the following path and query parameters: 21.4.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of direct endpoint. String 21.4.2. Query Parameters (8 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a DIRECT endpoint with no active consumers. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long synchronous (advanced) Whether synchronous processing is forced. If enabled then the producer thread, will be forced to wait until the message has been completed before the same thread will continue processing. If disabled (default) then the producer thread may be freed and can do other work while the message is continued processed by other threads (reactive). false boolean 21.5. 
Samples In the route below we use the direct component to link the two routes together: from("activemq:queue:order.in") .to("bean:orderServer?method=validate") .to("direct:processOrder"); from("direct:processOrder") .to("bean:orderService?method=process") .to("activemq:queue:order.out"); And the sample using spring DSL: <route> <from uri="activemq:queue:order.in"/> <to uri="bean:orderService?method=validate"/> <to uri="direct:processOrder"/> </route> <route> <from uri="direct:processOrder"/> <to uri="bean:orderService?method=process"/> <to uri="activemq:queue:order.out"/> </route> See also samples from the SEDA component, how they can be used together. 21.6. Spring Boot Auto-Configuration When using direct with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-direct-starter</artifactId> </dependency> The component supports 6 options, which are listed below. Name Description Default Type camel.component.direct.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.direct.block If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true Boolean camel.component.direct.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.direct.enabled Whether to enable auto configuration of the direct component. This is enabled by default. Boolean camel.component.direct.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.direct.timeout The timeout value to use if block is enabled. 30000 Long
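The producer options above can be combined directly on the endpoint URI. The following variation of the sample route is a sketch, not part of the original chapter: it reuses the orderService bean and ActiveMQ queue names from the samples above and assumes a hypothetical requirement of waiting at most 5 seconds for the consumer route to start.

from("activemq:queue:order.in")
    .to("bean:orderService?method=validate")
    // block=true is the default; timeout shortens the default wait of 30000 ms
    // for the consumer route below to become active.
    .to("direct:processOrder?block=true&timeout=5000");

from("direct:processOrder")
    .to("bean:orderService?method=process")
    .to("activemq:queue:order.out");

The related failIfNoConsumers option controls whether sending to a direct endpoint with no active consumers fails with an exception.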
[ "direct:someName[?options]", "direct:name", "from(\"activemq:queue:order.in\") .to(\"bean:orderServer?method=validate\") .to(\"direct:processOrder\"); from(\"direct:processOrder\") .to(\"bean:orderService?method=process\") .to(\"activemq:queue:order.out\");", "<route> <from uri=\"activemq:queue:order.in\"/> <to uri=\"bean:orderService?method=validate\"/> <to uri=\"direct:processOrder\"/> </route> <route> <from uri=\"direct:processOrder\"/> <to uri=\"bean:orderService?method=process\"/> <to uri=\"activemq:queue:order.out\"/> </route>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-direct-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-direct-component-starter
Chapter 72. server
Chapter 72. server This chapter describes the commands under the server command. 72.1. server add fixed ip Add fixed IP address to server Usage: Table 72.1. Positional arguments Value Summary <server> Server to receive the fixed ip address (name or id) <network> Network to allocate the fixed ip address from (name or ID) Table 72.2. Command arguments Value Summary -h, --help Show this help message and exit --fixed-ip-address <ip-address> Requested fixed ip address --tag <tag> Tag for the attached interface. (supported by --os- compute-api-version 2.52 or above) 72.2. server add floating ip Add floating IP address to server Usage: Table 72.3. Positional arguments Value Summary <server> Server to receive the floating ip address (name or id) <ip-address> Floating ip address to assign to the first available server port (IP only) Table 72.4. Command arguments Value Summary -h, --help Show this help message and exit --fixed-ip-address <ip-address> Fixed ip address to associate with this floating ip address. The first server port containing the fixed IP address will be used 72.3. server add network Add network to server Usage: Table 72.5. Positional arguments Value Summary <server> Server to add the network to (name or id) <network> Network to add to the server (name or id) Table 72.6. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag for the attached interface. (supported by --os-compute-api- version 2.49 or above) 72.4. server add port Add port to server Usage: Table 72.7. Positional arguments Value Summary <server> Server to add the port to (name or id) <port> Port to add to the server (name or id) Table 72.8. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag for the attached interface. (supported by api versions 2.49 - 2.latest ) 72.5. server add security group Add security group to server Usage: Table 72.9. Positional arguments Value Summary <server> Server (name or id) <group> Security group to add (name or id) Table 72.10. Command arguments Value Summary -h, --help Show this help message and exit 72.6. server add volume Add volume to server. Specify ``--os-compute-api-version 2.20`` or higher to add a volume to a server with status ``SHELVED`` or ``SHELVED_OFFLOADED``. Usage: Table 72.11. Positional arguments Value Summary <server> Server (name or id) <volume> Volume to add (name or id) Table 72.12. Command arguments Value Summary -h, --help Show this help message and exit --device <device> Server internal device name for volume --tag <tag> Tag for the attached volume (supported by --os- compute-api-version 2.49 or above) --enable-delete-on-termination Delete the volume when the server is destroyed (supported by --os-compute-api-version 2.79 or above) --disable-delete-on-termination Do not delete the volume when the server is destroyed (supported by --os-compute-api-version 2.79 or above) 72.7. server backup create Create a server backup image Usage: Table 72.13. Positional arguments Value Summary <server> Server to back up (name or id) Table 72.14. Command arguments Value Summary -h, --help Show this help message and exit --name <image-name> Name of the backup image (default: server name) --type <backup-type> Used to populate the backup_type property of the backup image (default: empty) --rotate <count> Number of backups to keep (default: 1) --wait Wait for backup image create to complete Table 72.15. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.8. server create Create a new server Usage: Table 72.19. Positional arguments Value Summary <server-name> New server name Table 72.20. Command arguments Value Summary -h, --help Show this help message and exit --flavor <flavor> Create server with this flavor (name or id) --image <image> Create server boot disk from this image (name or id) --image-property <key=value> Create server using the image that matches the specified property. Property must match exactly one property. --volume <volume> Create server using this volume as the boot disk (name or ID) This option automatically creates a block device mapping with a boot index of 0. On many hypervisors (libvirt/kvm for example) this will be device vda. Do not create a duplicate mapping using --block-device- mapping for this volume. --snapshot <snapshot> Create server using this snapshot as the boot disk (name or ID) This option automatically creates a block device mapping with a boot index of 0. On many hypervisors (libvirt/kvm for example) this will be device vda. Do not create a duplicate mapping using --block-device- mapping for this volume. --boot-from-volume <volume-size> When used in conjunction with the ``--image`` or ``--image-property`` option, this option automatically creates a block device mapping with a boot index of 0 and tells the compute service to create a volume of the given size (in GB) from the specified image and use it as the root disk of the server. The root volume will not be deleted when the server is deleted. This option is mutually exclusive with the ``--volume`` and ``--snapshot`` options. --block-device-mapping <dev-name=mapping> deprecated create a block device on the server. Block device mapping in the format <dev-name>=<id>:<type>:<size(GB)>:<delete-on- terminate> <dev-name>: block device name, like: vdb, xvdc (required) <id>: Name or ID of the volume, volume snapshot or image (required) <type>: volume, snapshot or image; default: volume (optional) <size(GB)>: volume size if create from image or snapshot (optional) <delete-on-terminate>: true or false; default: false (optional) Replaced by --block-device --block-device Create a block device on the server. Either a path to a JSON file or a CSV-serialized string describing the block device mapping. The following keys are accepted for both: uuid=<uuid>: UUID of the volume, snapshot or ID (required if using source image, snapshot or volume), source_type=<source_type>: source type (one of: image, snapshot, volume, blank), destination_typ=<destination_type>: destination type (one of: volume, local) (optional), disk_bus=<disk_bus>: device bus (one of: uml, lxc, virtio, ... 
) (optional), device_type=<device_type>: device type (one of: disk, cdrom, etc. (optional), device_name=<device_name>: name of the device (optional), volume_size=<volume_size>: size of the block device in MiB (for swap) or GiB (for everything else) (optional), guest_format=<guest_format>: format of device (optional), boot_index=<boot_index>: index of disk used to order boot disk (required for volume-backed instances), delete_on_termination=<true|false>: whether to delete the volume upon deletion of server (optional), tag=<tag>: device metadata tag (optional), volume_type=<volume_type>: type of volume to create (name or ID) when source if blank, image or snapshot and dest is volume (optional) --swap <swap> Create and attach a local swap block device of <swap_size> MiB. --ephemeral <size=size[,format=format]> Create and attach a local ephemeral block device of <size> GiB and format it to <format>. --network <network> Create a nic on the server and connect it to network. Specify option multiple times to create multiple NICs. This is a wrapper for the --nic net-id=<network> parameter that provides simple syntax for the standard use case of connecting a new server to a given network. For more advanced use cases, refer to the -- nic parameter. --port <port> Create a nic on the server and connect it to port. Specify option multiple times to create multiple NICs. This is a wrapper for the --nic port-id=<port> parameter that provides simple syntax for the standard use case of connecting a new server to a given port. For more advanced use cases, refer to the --nic parameter. --nic <net-id=net-uuid,port-id=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,tag=tag,auto,none> Create a nic on the server. NIC in the format: net-id=<net-uuid>: attach NIC to network with this UUID, port-id=<port-uuid>: attach NIC to port with this UUID, v4-fixed-ip=<ip-addr>: IPv4 fixed address for NIC (optional), v6-fixed-ip=<ip-addr>: IPv6 fixed address for NIC (optional), tag: interface metadata tag (optional) (supported by --os-compute-api-version 2.43 or above), none: (v2.37+) no network is attached, auto: (v2.37+) the compute service will automatically allocate a network. Specify option multiple times to create multiple NICs. Specifying a --nic of auto or none cannot be used with any other --nic value. Either net-id or port-id must be provided, but not both. --password <password> Set the password to this server. this option requires cloud support. --security-group <security-group> Security group to assign to this server (name or id) (repeat option to set multiple groups) --key-name <key-name> Keypair to inject into this server --property <key=value> Set a property on this server (repeat option to set multiple values) --file <dest-filename=source-filename> File(s) to inject into image before boot (repeat option to set multiple files)(supported by --os- compute-api-version 2.57 or below) --user-data <user-data> User data file to serve from the metadata server --description <description> Set description for the server (supported by --os- compute-api-version 2.19 or above) --availability-zone <zone-name> Select an availability zone for the server. host and node are optional parameters. Availability zone in the format <zone-name>:<host-name>:<node-name>, <zone- name>::<node-name>, <zone-name>:<host-name> or <zone- name> --host <host> Requested host to create servers. 
(admin only) (supported by --os-compute-api-version 2.74 or above) --hypervisor-hostname <hypervisor-hostname> Requested hypervisor hostname to create servers. (admin only) (supported by --os-compute-api-version 2.74 or above) --hint <key=value> Hints for the scheduler --use-config-drive Enable config drive. --no-config-drive Disable config drive. --config-drive <config-drive-volume>|True deprecated use specified volume as the config drive, or True to use an ephemeral drive. Replaced by --use-config-drive . --min <count> Minimum number of servers to launch (default=1) --max <count> Maximum number of servers to launch (default=1) --tag <tag> Tags for the server. specify multiple times to add multiple tags. (supported by --os-compute-api-version 2.52 or above) --wait Wait for build to complete Table 72.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.9. server delete Delete server(s) Usage: Table 72.25. Positional arguments Value Summary <server> Server(s) to delete (name or id) Table 72.26. Command arguments Value Summary -h, --help Show this help message and exit --force Force delete server(s) --all-projects Delete server(s) in another project by name (admin only)(can be specified using the ALL_PROJECTS envvar) --wait Wait for delete to complete 72.10. server dump create Create a dump file in server(s) Trigger crash dump in server(s) with features like kdump in Linux. It will create a dump file in the server(s) dumping the server(s)' memory, and also crash the server(s). OSC sees the dump file (server dump) as a kind of resource. This command requires ``--os-compute-api- version`` 2.17 or greater. Usage: Table 72.27. Positional arguments Value Summary <server> Server(s) to create dump file (name or id) Table 72.28. Command arguments Value Summary -h, --help Show this help message and exit 72.11. server evacuate Evacuate a server to a different host. This command is used to recreate a server after the host it was on has failed. It can only be used if the compute service that manages the server is down. This command should only be used by an admin after they have confirmed that the instance is not running on the failed host. If the server instance was created with an ephemeral root disk on non-shared storage the server will be rebuilt using the original glance image preserving the ports and any attached data volumes. If the server uses boot for volume or has its root disk on shared storage the root disk will be preserved and reused for the evacuated instance on the new host. Usage: Table 72.29. Positional arguments Value Summary <server> Server (name or id) Table 72.30. 
Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for evacuation to complete --host <host> Set the preferred host on which to rebuild the evacuated server. The host will be validated by the scheduler. (supported by --os-compute-api-version 2.29 or above) --password <password> Set the password on the evacuated instance. this option is mutually exclusive with the --shared-storage option. This option requires cloud support. --shared-storage Indicate that the instance is on shared storage. this will be auto-calculated with --os-compute-api-version 2.14 and greater and should not be used with later microversions. This option is mutually exclusive with the --password option Table 72.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.12. server event list List recent events of a server. Specify ``--os-compute-api-version 2.21`` or higher to show events for a deleted server, specified by ID only. Usage: Table 72.35. Positional arguments Value Summary <server> Server to list events (name or id) Table 72.36. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --changes-since <changes-since> List only server events changed later or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.58 or above) --changes-before <changes-before> List only server events changed earlier or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.66 or above) --marker MARKER The last server event id of the page (supported by --os-compute-api-version 2.58 or above) --limit LIMIT Maximum number of server events to display (supported by --os-compute-api-version 2.58 or above) Table 72.37. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.38. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.40. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.13. server event show Show server event details. Specify ``--os-compute-api-version 2.21`` or higher to show event details for a deleted server, specified by ID only. Specify ``--os-compute-api-version 2.51`` or higher to show event details for non- admin users. Usage: Table 72.41. Positional arguments Value Summary <server> Server to show event details (name or id) <request-id> Request id of the event to show (id only) Table 72.42. Command arguments Value Summary -h, --help Show this help message and exit Table 72.43. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.44. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.45. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.46. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.14. server group create Create a new server group. Usage: Table 72.47. Positional arguments Value Summary <name> New server group name Table 72.48. Command arguments Value Summary -h, --help Show this help message and exit --policy <policy> Add a policy to <name> specify --os-compute-api- version 2.15 or higher for the soft-affinity or soft-anti-affinity policy. --rule <key=value> A rule for the policy. currently, only the max_server_per_host rule is supported for the anti- affinity policy. Table 72.49. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.50. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.51. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.52. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.15. server group delete Delete existing server group(s). Usage: Table 72.53. Positional arguments Value Summary <server-group> Server group(s) to delete (name or id) Table 72.54. Command arguments Value Summary -h, --help Show this help message and exit 72.16. server group list List all server groups. Usage: Table 72.55. 
Command arguments Value Summary -h, --help Show this help message and exit --all-projects Display information from all projects (admin only) --long List additional fields in output --offset <offset> Index from which to start listing servers. this should typically be a factor of --limit. Display all servers groups if not specified. --limit <limit> Maximum number of server groups to display. if limit is greater than osapi_max_limit option of Nova API, osapi_max_limit will be used instead. Table 72.56. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.57. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.58. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.59. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.17. server group show Display server group details. Usage: Table 72.60. Positional arguments Value Summary <server-group> Server group to display (name or id) Table 72.61. Command arguments Value Summary -h, --help Show this help message and exit Table 72.62. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.18. server image create Create a new server disk image from an existing server Usage: Table 72.66. Positional arguments Value Summary <server> Server to create image (name or id) Table 72.67. Command arguments Value Summary -h, --help Show this help message and exit --name <image-name> Name of new disk image (default: server name) --property <key=value> Set a new property to meta_data.json on the metadata server (repeat option to set multiple values) --wait Wait for operation to complete Table 72.68. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.69. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.70. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.71. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.19. server list List servers Usage: Table 72.72. Command arguments Value Summary -h, --help Show this help message and exit --reservation-id <reservation-id> Only return instances that match the reservation --ip <ip-address-regex> Regular expression to match ip addresses --ip6 <ip-address-regex> Regular expression to match ipv6 addresses. note that this option only applies for non-admin users when using ``--os-compute-api-version`` 2.5 or greater. --name <name-regex> Regular expression to match names --instance-name <server-name> Regular expression to match instance name (admin only) --status <status> Search by server status --flavor <flavor> Search by flavor (name or id) --image <image> Search by image (name or id) --host <hostname> Search by hostname --all-projects Include all projects (admin only) (can be specified using the ALL_PROJECTS envvar) --project <project> Search by project (admin only) (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user <user> Search by user (name or id) (admin only before microversion 2.83) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --deleted Only display deleted servers (admin only) --availability-zone AVAILABILITY_ZONE Search by availability zone (admin only before microversion 2.83) --key-name KEY_NAME Search by keypair name (admin only before microversion 2.83) --config-drive Only display servers with a config drive attached (admin only before microversion 2.83) --no-config-drive Only display servers without a config drive attached (admin only before microversion 2.83) --progress PROGRESS Search by progress value (%) (admin only before microversion 2.83) --vm-state <state> Search by vm_state value (admin only before microversion 2.83) --task-state <state> Search by task_state value (admin only before microversion 2.83) --power-state <state> Search by power_state value (admin only before microversion 2.83) --long List additional fields in output -n, --no-name-lookup Skip flavor and image name lookup. mutually exclusive with "--name-lookup-one-by-one" option. --name-lookup-one-by-one When looking up flavor and image names, look them upone by one as needed instead of all together (default). Mutually exclusive with "--no-name- lookup|-n" option. --marker <server> The last server of the page. display list of servers after marker. Display all servers if not specified. When used with ``--deleted``, the marker must be an ID, otherwise a name or ID can be used. 
--limit <num-servers> Maximum number of servers to display. if limit equals -1, all servers will be displayed. If limit is greater than osapi_max_limit option of Nova API, osapi_max_limit will be used instead. --changes-before <changes-before> List only servers changed before a certain point of time. The provided time should be an ISO 8061 formatted time (e.g., 2016-03-05T06:27:59Z). (supported by --os-compute-api-version 2.66 or above) --changes-since <changes-since> List only servers changed after a certain point of time. The provided time should be an ISO 8061 formatted time (e.g., 2016-03-04T06:27:59Z). --locked Only display locked servers (supported by --os- compute-api-version 2.73 or above) --unlocked Only display unlocked servers (supported by --os- compute-api-version 2.73 or above) --tags <tag> Only list servers with the specified tag. specify multiple times to filter on multiple tags. (supported by --os-compute-api-version 2.26 or above) --not-tags <tag> Only list servers without the specified tag. specify multiple times to filter on multiple tags. (supported by --os-compute-api-version 2.26 or above) Table 72.73. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.74. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.75. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.76. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.20. server lock Lock server(s). A non-admin user will not be able to execute actions Usage: Table 72.77. Positional arguments Value Summary <server> Server(s) to lock (name or id) Table 72.78. Command arguments Value Summary -h, --help Show this help message and exit --reason <reason> Reason for locking the server(s). requires ``--os- compute-api-version`` 2.73 or greater. 72.21. server migrate confirm DEPRECATED: Confirm server migration. Use server migration confirm instead. Usage: Table 72.79. Positional arguments Value Summary <server> Server (name or id) Table 72.80. Command arguments Value Summary -h, --help Show this help message and exit 72.22. server migrate revert Revert server migration. Use server migration revert instead. Usage: Table 72.81. Positional arguments Value Summary <server> Server (name or id) Table 72.82. Command arguments Value Summary -h, --help Show this help message and exit 72.23. server migrate Migrate server to different host. A migrate operation is implemented as a resize operation using the same flavor as the old server. This means that, like resize, migrate works by creating a new server using the same flavor and copying the contents of the original disk into a new one. 
As with resize, the migrate operation is a two-step process for the user: the first step is to perform the migrate, and the second step is to either confirm (verify) success and release the old server, or to declare a revert to release the new server and restart the old one. Usage: Table 72.83. Positional arguments Value Summary <server> Server (name or id) Table 72.84. Command arguments Value Summary -h, --help Show this help message and exit --live-migration Live migrate the server; use the ``--host`` option to specify a target host for the migration which will be validated by the scheduler --host <hostname> Migrate the server to the specified host. (supported with --os-compute-api-version 2.30 or above when used with the --live-migration option) (supported with --os-compute-api-version 2.56 or above when used without the --live-migration option) --shared-migration Perform a shared live migration (default before --os- compute-api-version 2.25, auto after) --block-migration Perform a block live migration (auto-configured from --os-compute-api-version 2.25) --disk-overcommit Allow disk over-commit on the destination host(supported with --os-compute-api-version 2.24 or below) --no-disk-overcommit Do not over-commit disk on the destination host (default)(supported with --os-compute-api-version 2.24 or below) --wait Wait for migrate to complete 72.24. server migration abort Cancel an ongoing live migration. This command requires ``--os-compute-api- version`` 2.24 or greater. Usage: Table 72.85. Positional arguments Value Summary <server> Server (name or id) <migration> Migration (id) Table 72.86. Command arguments Value Summary -h, --help Show this help message and exit 72.25. server migration confirm Confirm server migration. Confirm (verify) success of the migration operation and release the old server. Usage: Table 72.87. Positional arguments Value Summary <server> Server (name or id) Table 72.88. Command arguments Value Summary -h, --help Show this help message and exit 72.26. server migration force complete Force an ongoing live migration to complete. This command requires ``--os- compute-api-version`` 2.22 or greater. Usage: Table 72.89. Positional arguments Value Summary <server> Server (name or id) <migration> Migration (id) Table 72.90. Command arguments Value Summary -h, --help Show this help message and exit 72.27. server migration list List server migrations Usage: Table 72.91. Command arguments Value Summary -h, --help Show this help message and exit --server <server> Filter migrations by server (name or id) --host <host> Filter migrations by source or destination host --status <status> Filter migrations by status --type <type> Filter migrations by type --marker <marker> The last migration of the page; displays list of migrations after marker . Note that the marker is the migration UUID. (supported with --os-compute-api- version 2.59 or above) --limit <limit> Maximum number of migrations to display. note that there is a configurable max limit on the server, and the limit that is used will be the minimum of what is requested here and what is configured in the server. (supported with --os-compute-api-version 2.59 or above) --changes-since <changes-since> List only migrations changed later or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.59 or above) --changes-before <changes-before> List only migrations changed earlier or equal to a certain point of time. 
The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.66 or above) --project <project> Filter migrations by project (name or id) (supported with --os-compute-api-version 2.80 or above) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user <user> Filter migrations by user (name or id) (supported with --os-compute-api-version 2.80 or above) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 72.92. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.93. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.94. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.28. server migration revert Revert server migration. Revert the migration operation. Release the new server and restart the old one. Usage: Table 72.96. Positional arguments Value Summary <server> Server (name or id) Table 72.97. Command arguments Value Summary -h, --help Show this help message and exit 72.29. server migration show Show a migration for a given server. Usage: Table 72.98. Positional arguments Value Summary <server> Server (name or id) <migration> Migration (id) Table 72.99. Command arguments Value Summary -h, --help Show this help message and exit 72.30. server pause Pause server(s) Usage: Table 72.100. Positional arguments Value Summary <server> Server(s) to pause (name or id) Table 72.101. Command arguments Value Summary -h, --help Show this help message and exit 72.31. server reboot Perform a hard or soft server reboot Usage: Table 72.102. Positional arguments Value Summary <server> Server (name or id) Table 72.103. Command arguments Value Summary -h, --help Show this help message and exit --hard Perform a hard reboot --soft Perform a soft reboot --wait Wait for reboot to complete 72.32. server rebuild Rebuild server Usage: Table 72.104. Positional arguments Value Summary <server> Server (name or id) Table 72.105. Command arguments Value Summary -h, --help Show this help message and exit --image <image> Recreate server from the specified image (name or ID).Defaults to the currently used one. --name <name> Set the new name of the rebuilt server --password <password> Set the password on the rebuilt server. this option requires cloud support. 
--property <key=value> Set a new property on the rebuilt server (repeat option to set multiple values) --description <description> Set a new description on the rebuilt server (supported by --os-compute-api-version 2.19 or above) --preserve-ephemeral Preserve the default ephemeral storage partition on rebuild. --no-preserve-ephemeral Do not preserve the default ephemeral storage partition on rebuild. --key-name <key-name> Set the key name of key pair on the rebuilt server. Cannot be specified with the --key-unset option. (supported by --os-compute-api-version 2.54 or above) --no-key-name Unset the key name of key pair on the rebuilt server. Cannot be specified with the --key-name option. (supported by --os-compute-api-version 2.54 or above) --user-data <user-data> Add a new user data file to the rebuilt server. cannot be specified with the --no-user-data option. (supported by --os-compute-api-version 2.57 or above) --no-user-data Remove existing user data when rebuilding server. Cannot be specified with the --user-data option. (supported by --os-compute-api-version 2.57 or above) --trusted-image-cert <trusted-cert-id> Trusted image certificate ids used to validate certificates during the image signature verification process. Defaults to env[OS_TRUSTED_IMAGE_CERTIFICATE_IDS]. May be specified multiple times to pass multiple trusted image certificate IDs. Cannot be specified with the --no-trusted-certs option. (supported by --os-compute- api-version 2.63 or above) --no-trusted-image-certs Remove any existing trusted image certificates from the server. Cannot be specified with the --trusted- certs option. (supported by --os-compute-api-version 2.63 or above) --wait Wait for rebuild to complete Table 72.106. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.108. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.33. server remove fixed ip Remove fixed IP address from server Usage: Table 72.110. Positional arguments Value Summary <server> Server to remove the fixed ip address from (name or id) <ip-address> Fixed ip address to remove from the server (ip only) Table 72.111. Command arguments Value Summary -h, --help Show this help message and exit 72.34. server remove floating ip Remove floating IP address from server Usage: Table 72.112. Positional arguments Value Summary <server> Server to remove the floating ip address from (name or id) <ip-address> Floating ip address to remove from server (ip only) Table 72.113. Command arguments Value Summary -h, --help Show this help message and exit 72.35. server remove network Remove all ports of a network from server Usage: Table 72.114. Positional arguments Value Summary <server> Server to remove the port from (name or id) <network> Network to remove from the server (name or id) Table 72.115. 
Command arguments Value Summary -h, --help Show this help message and exit 72.36. server remove port Remove port from server Usage: Table 72.116. Positional arguments Value Summary <server> Server to remove the port from (name or id) <port> Port to remove from the server (name or id) Table 72.117. Command arguments Value Summary -h, --help Show this help message and exit 72.37. server remove security group Remove security group from server Usage: Table 72.118. Positional arguments Value Summary <server> Name or id of server to use <group> Name or id of security group to remove from server Table 72.119. Command arguments Value Summary -h, --help Show this help message and exit 72.38. server remove volume Remove volume from server. Specify ``--os-compute-api-version 2.20`` or higher to remove a volume from a server with status ``SHELVED`` or ``SHELVED_OFFLOADED``. Usage: Table 72.120. Positional arguments Value Summary <server> Server (name or id) <volume> Volume to remove (name or id) Table 72.121. Command arguments Value Summary -h, --help Show this help message and exit 72.39. server rescue Put server in rescue mode Usage: Table 72.122. Positional arguments Value Summary <server> Server (name or id) Table 72.123. Command arguments Value Summary -h, --help Show this help message and exit --image <image> Image (name or id) to use for the rescue mode. Defaults to the currently used one. --password <password> Set the password on the rescued instance. this option requires cloud support. 72.40. server resize confirm Confirm server resize. Confirm (verify) success of resize operation and release the old server. Usage: Table 72.124. Positional arguments Value Summary <server> Server (name or id) Table 72.125. Command arguments Value Summary -h, --help Show this help message and exit 72.41. server resize revert Revert server resize. Revert the resize operation. Release the new server and restart the old one. Usage: Table 72.126. Positional arguments Value Summary <server> Server (name or id) Table 72.127. Command arguments Value Summary -h, --help Show this help message and exit 72.42. server resize Scale server to a new flavor. A resize operation is implemented by creating a new server and copying the contents of the original disk into a new one. It is a two-step process for the user: the first step is to perform the resize, and the second step is to either confirm (verify) success and release the old server or to declare a revert to release the new server and restart the old one. Usage: Table 72.128. Positional arguments Value Summary <server> Server (name or id) Table 72.129. Command arguments Value Summary -h, --help Show this help message and exit --flavor <flavor> Resize server to specified flavor --confirm Confirm server resize is complete --revert Restore server state before resize --wait Wait for resize to complete 72.43. server restore Restore server(s) Usage: Table 72.130. Positional arguments Value Summary <server> Server(s) to restore (name or id) Table 72.131. Command arguments Value Summary -h, --help Show this help message and exit 72.44. server resume Resume server(s) Usage: Table 72.132. Positional arguments Value Summary <server> Server(s) to resume (name or id) Table 72.133. Command arguments Value Summary -h, --help Show this help message and exit 72.45. server set Set server properties Usage: Table 72.134. Positional arguments Value Summary <server> Server (name or id) Table 72.135. 
Command arguments Value Summary -h, --help Show this help message and exit --name <new-name> New server name --password PASSWORD Set the server password. this option requires cloud support. --no-password Clear the admin password for the server from the metadata service; note that this action does not actually change the server password --property <key=value> Property to add/change for this server (repeat option to set multiple properties) --state <state> New server state (valid value: active, error) --description <description> New server description (supported by --os-compute-api- version 2.19 or above) --tag <tag> Tag for the server. specify multiple times to add multiple tags. (supported by --os-compute-api-version 2.26 or above) 72.46. server shelve Shelve and optionally offload server(s). Shelving a server creates a snapshot of the server and stores this snapshot before shutting down the server. This shelved server can then be offloaded or deleted from the host, freeing up remaining resources on the host, such as network interfaces. Shelved servers can be unshelved, restoring the server from the snapshot. Shelving is therefore useful where users wish to retain the UUID and IP of a server, without utilizing other resources or disks. Most clouds are configured to automatically offload shelved servers immediately or after a small delay. For clouds where this is not configured, or where the delay is larger, offloading can be manually specified. This is an admin-only operation by default. Usage: Table 72.136. Positional arguments Value Summary <server> Server(s) to shelve (name or id) Table 72.137. Command arguments Value Summary -h, --help Show this help message and exit --offload Remove the shelved server(s) from the host (admin only). Invoking this option on an unshelved server(s) will result in the server being shelved first --wait Wait for shelve and/or offload operation to complete 72.47. server show Show server details. Specify ``--os-compute-api-version 2.47`` or higher to see the embedded flavor information for the server. Usage: Table 72.138. Positional arguments Value Summary <server> Server (name or id) Table 72.139. Command arguments Value Summary -h, --help Show this help message and exit --diagnostics Display server diagnostics information --topology Include topology information in the output (supported by --os-compute-api-version 2.78 or above) Table 72.140. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.141. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.142. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.143. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.48. server ssh SSH to server Usage: Table 72.144. Positional arguments Value Summary <server> Server (name or id) Table 72.145. 
Command arguments Value Summary -h, --help Show this help message and exit --login <login-name> Login name (ssh -l option) --port <port> Destination port (ssh -p option) --identity <keyfile> Private key file (ssh -i option) --option <config-options> Options in ssh_config(5) format (ssh -o option) -4 Use only ipv4 addresses -6 Use only ipv6 addresses --public Use public ip address --private Use private ip address --address-type <address-type> Use other ip address (public, private, etc) 72.49. server start Start server(s). Usage: Table 72.146. Positional arguments Value Summary <server> Server(s) to start (name or id) Table 72.147. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Start server(s) in another project by name (admin only)(can be specified using the ALL_PROJECTS envvar) 72.50. server stop Stop server(s). Usage: Table 72.148. Positional arguments Value Summary <server> Server(s) to stop (name or id) Table 72.149. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Stop server(s) in another project by name (admin only)(can be specified using the ALL_PROJECTS envvar) 72.51. server suspend Suspend server(s) Usage: Table 72.150. Positional arguments Value Summary <server> Server(s) to suspend (name or id) Table 72.151. Command arguments Value Summary -h, --help Show this help message and exit 72.52. server unlock Unlock server(s) Usage: Table 72.152. Positional arguments Value Summary <server> Server(s) to unlock (name or id) Table 72.153. Command arguments Value Summary -h, --help Show this help message and exit 72.53. server unpause Unpause server(s) Usage: Table 72.154. Positional arguments Value Summary <server> Server(s) to unpause (name or id) Table 72.155. Command arguments Value Summary -h, --help Show this help message and exit 72.54. server unrescue Restore server from rescue mode Usage: Table 72.156. Positional arguments Value Summary <server> Server (name or id) Table 72.157. Command arguments Value Summary -h, --help Show this help message and exit 72.55. server unset Unset server properties and tags Usage: Table 72.158. Positional arguments Value Summary <server> Server (name or id) Table 72.159. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property key to remove from server (repeat option to remove multiple values) --description Unset server description (supported by --os-compute-api- version 2.19 or above) --tag <tag> Tag to remove from the server. specify multiple times to remove multiple tags. (supported by --os-compute-api- version 2.26 or above) 72.56. server unshelve Unshelve server(s) Usage: Table 72.160. Positional arguments Value Summary <server> Server(s) to unshelve (name or id) Table 72.161. Command arguments Value Summary -h, --help Show this help message and exit --availability-zone AVAILABILITY_ZONE Name of the availability zone in which to unshelve a SHELVED_OFFLOADED server (supported by --os-compute- api-version 2.77 or above) --wait Wait for unshelve operation to complete 72.57. server volume list List all the volumes attached to a server. Usage: Table 72.162. Positional arguments Value Summary server Server to list volume attachments for (name or id) Table 72.163. Command arguments Value Summary -h, --help Show this help message and exit Table 72.164. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.165. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.166. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.167. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.58. server volume update Update a volume attachment on the server. Usage: Table 72.168. Positional arguments Value Summary server Server to update volume for (name or id) volume Volume (id) Table 72.169. Command arguments Value Summary -h, --help Show this help message and exit --delete-on-termination Delete the volume when the server is destroyed (supported by --os-compute-api-version 2.85 or above) --preserve-on-termination Preserve the volume when the server is destroyed (supported by --os-compute-api-version 2.85 or above)
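The cold and live migration commands described above follow the same two-step confirm-or-revert pattern as resize. The following is a brief, hedged sketch of that workflow only; the server name demo-server and the decision to wait for completion are assumptions for illustration and do not come from this reference:

openstack server migrate --wait demo-server
openstack server migrate confirm demo-server

If the migrated server does not behave as expected on its new host, the revert form releases the new server and restarts the old one instead:

openstack server migrate revert demo-server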
[ "openstack server add fixed ip [-h] [--fixed-ip-address <ip-address>] [--tag <tag>] <server> <network>", "openstack server add floating ip [-h] [--fixed-ip-address <ip-address>] <server> <ip-address>", "openstack server add network [-h] [--tag <tag>] <server> <network>", "openstack server add port [-h] [--tag <tag>] <server> <port>", "openstack server add security group [-h] <server> <group>", "openstack server add volume [-h] [--device <device>] [--tag <tag>] [--enable-delete-on-termination | --disable-delete-on-termination] <server> <volume>", "openstack server backup create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <image-name>] [--type <backup-type>] [--rotate <count>] [--wait] <server>", "openstack server create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --flavor <flavor> (--image <image> | --image-property <key=value> | --volume <volume> | --snapshot <snapshot>) [--boot-from-volume <volume-size>] [--block-device-mapping <dev-name=mapping>] [--block-device] [--swap <swap>] [--ephemeral <size=size[,format=format]>] [--network <network>] [--port <port>] [--nic <net-id=net-uuid,port-id=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,tag=tag,auto,none>] [--password <password>] [--security-group <security-group>] [--key-name <key-name>] [--property <key=value>] [--file <dest-filename=source-filename>] [--user-data <user-data>] [--description <description>] [--availability-zone <zone-name>] [--host <host>] [--hypervisor-hostname <hypervisor-hostname>] [--hint <key=value>] [--use-config-drive | --no-config-drive | --config-drive <config-drive-volume>|True] [--min <count>] [--max <count>] [--tag <tag>] [--wait] <server-name>", "openstack server delete [-h] [--force] [--all-projects] [--wait] <server> [<server> ...]", "openstack server dump create [-h] <server> [<server> ...]", "openstack server evacuate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--wait] [--host <host>] [--password <password> | --shared-storage] <server>", "openstack server event list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--changes-since <changes-since>] [--changes-before <changes-before>] [--marker MARKER] [--limit LIMIT] <server>", "openstack server event show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <server> <request-id>", "openstack server group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--policy <policy>] [--rule <key=value>] <name>", "openstack server group delete [-h] <server-group> [<server-group> ...]", "openstack server group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--long] [--offset <offset>] [--limit <limit>]", "openstack server group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width 
<integer>] [--fit-width] [--print-empty] <server-group>", "openstack server image create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <image-name>] [--property <key=value>] [--wait] <server>", "openstack server list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--reservation-id <reservation-id>] [--ip <ip-address-regex>] [--ip6 <ip-address-regex>] [--name <name-regex>] [--instance-name <server-name>] [--status <status>] [--flavor <flavor>] [--image <image>] [--host <hostname>] [--all-projects] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>] [--deleted] [--availability-zone AVAILABILITY_ZONE] [--key-name KEY_NAME] [--config-drive | --no-config-drive] [--progress PROGRESS] [--vm-state <state>] [--task-state <state>] [--power-state <state>] [--long] [-n | --name-lookup-one-by-one] [--marker <server>] [--limit <num-servers>] [--changes-before <changes-before>] [--changes-since <changes-since>] [--locked | --unlocked] [--tags <tag>] [--not-tags <tag>]", "openstack server lock [-h] [--reason <reason>] <server> [<server> ...]", "openstack server migrate confirm [-h] <server>", "openstack server migrate revert [-h] <server>", "openstack server migrate [-h] [--live-migration] [--host <hostname>] [--shared-migration | --block-migration] [--disk-overcommit | --no-disk-overcommit] [--wait] <server>", "openstack server migration abort [-h] <server> <migration>", "openstack server migration confirm [-h] <server>", "openstack server migration force complete [-h] <server> <migration>", "openstack server migration list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--server <server>] [--host <host>] [--status <status>] [--type <type>] [--marker <marker>] [--limit <limit>] [--changes-since <changes-since>] [--changes-before <changes-before>] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>]", "openstack server migration revert [-h] <server>", "openstack server migration show [-h] <server> <migration>", "openstack server pause [-h] <server> [<server> ...]", "openstack server reboot [-h] [--hard | --soft] [--wait] <server>", "openstack server rebuild [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--image <image>] [--name <name>] [--password <password>] [--property <key=value>] [--description <description>] [--preserve-ephemeral | --no-preserve-ephemeral] [--key-name <key-name> | --no-key-name] [--user-data <user-data> | --no-user-data] [--trusted-image-cert <trusted-cert-id> | --no-trusted-image-certs] [--wait] <server>", "openstack server remove fixed ip [-h] <server> <ip-address>", "openstack server remove floating ip [-h] <server> <ip-address>", "openstack server remove network [-h] <server> <network>", "openstack server remove port [-h] <server> <port>", "openstack server remove security group [-h] <server> <group>", "openstack server remove volume [-h] <server> <volume>", "openstack server rescue [-h] [--image <image>] [--password <password>] <server>", 
"openstack server resize confirm [-h] <server>", "openstack server resize revert [-h] <server>", "openstack server resize [-h] [--flavor <flavor> | --confirm | --revert] [--wait] <server>", "openstack server restore [-h] <server> [<server> ...]", "openstack server resume [-h] <server> [<server> ...]", "openstack server set [-h] [--name <new-name>] [--password PASSWORD | --no-password] [--property <key=value>] [--state <state>] [--description <description>] [--tag <tag>] <server>", "openstack server shelve [-h] [--offload] [--wait] <server> [<server> ...]", "openstack server show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--diagnostics | --topology] <server>", "openstack server ssh [-h] [--login <login-name>] [--port <port>] [--identity <keyfile>] [--option <config-options>] [-4 | -6] [--public | --private | --address-type <address-type>] <server>", "openstack server start [-h] [--all-projects] <server> [<server> ...]", "openstack server stop [-h] [--all-projects] <server> [<server> ...]", "openstack server suspend [-h] <server> [<server> ...]", "openstack server unlock [-h] <server> [<server> ...]", "openstack server unpause [-h] <server> [<server> ...]", "openstack server unrescue [-h] <server>", "openstack server unset [-h] [--property <key>] [--description] [--tag <tag>] <server>", "openstack server unshelve [-h] [--availability-zone AVAILABILITY_ZONE] [--wait] <server> [<server> ...]", "openstack server volume list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] server", "openstack server volume update [-h] [--delete-on-termination | --preserve-on-termination] server volume" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/server
Chapter 14. Configuring the Transactions Subsystem
Chapter 14. Configuring the Transactions Subsystem The transactions subsystem allows you to configure the Transaction Manager (TM) options, such as timeout values, transaction logging, statistics collection, and whether to use JTS. JBoss EAP provides transactional services using the Narayana framework. This framework leverages support for a broad range of transaction protocols based on standards, such as Jakarta Transactions, JTS, and Web Service transactions. For more information, see Managing Transactions on JBoss EAP .
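As a minimal, hedged sketch of such configuration (the 600-second value and the non-interactive use of the management CLI on a running server are assumptions for illustration; see Managing Transactions on JBoss EAP for the authoritative attribute reference), the default transaction timeout can be adjusted from the management CLI:

USD EAP_HOME/bin/jboss-cli.sh --connect
/subsystem=transactions:write-attribute(name=default-timeout, value=600)

Depending on the attribute being changed, the management CLI may indicate that a server reload is required before the new value takes effect.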
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_transactions_subsystem
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip this section and go straight to Mirroring the OpenShift Container Platform image repository . 2.1. Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 and 9 with Podman 3.4.2 or later and OpenSSL installed. Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 12 GB for OpenShift Container Platform 4.16 release images, or about 358 GB for OpenShift Container Platform 4.16 release images and OpenShift Container Platform 4.16 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only release images and Operator images. Storage requirements can vary based on your organization's needs. You might require more space, for example, when you mirror multiple z-streams. You can use standard Red Hat Quay functionality or the proper API callout to remove unnecessary images and free up space. 2.2. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with preconfigured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 2.2.1. Mirror registry for Red Hat OpenShift limitations The following limitations apply to the mirror registry for Red Hat OpenShift : The mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. 
It is not intended to replace Red Hat Quay or the internal image registry for OpenShift Container Platform. The mirror registry for Red Hat OpenShift is not intended to be a substitute for a production deployment of Red Hat Quay. The mirror registry for Red Hat OpenShift is only supported for hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Note Because the mirror registry for Red Hat OpenShift uses local storage, you should remain aware of the storage usage consumed when mirroring images and use Red Hat Quay's garbage collection feature to mitigate potential issues. For more information about this feature, see "Red Hat Quay garbage collection". Support for Red Hat product images that are pushed to the mirror registry for Red Hat OpenShift for bootstrapping purposes are covered by valid subscriptions for each respective product. A list of exceptions to further enable the bootstrap experience can be found on the Self-managed Red Hat OpenShift sizing and subscription guide . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 2.3. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a USDHOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". 
USD ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: USD podman login -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.4. Updating mirror registry for Red Hat OpenShift from a local host This procedure explains how to update the mirror registry for Red Hat OpenShift from a local host using the upgrade command. Updating to the latest version ensures new features, bug fixes, and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. You must not use the mirror registry for Red Hat OpenShift user interface (UI). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /USDHOME/quay-install/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a local host. Procedure If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y, and your installation directory is the default at /etc/quay-install , you can enter the following command: USD sudo ./mirror-registry upgrade -v Note mirror registry for Red Hat OpenShift migrates Podman volumes for Quay storage, Postgres data, and /etc/quay-install data to the new USDHOME/quay-install location. This allows you to use mirror registry for Red Hat OpenShift without the --quayRoot flag during future upgrades. Users who upgrade mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y and you used a custom quay configuration and storage directory in your 1.y deployment, you must pass in the --quayRoot and --quayStorage flags. 
For example: USD sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: USD sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v 2.5. Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a USDHOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: USD podman login -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.6. Updating mirror registry for Red Hat OpenShift from a remote host This procedure explains how to update the mirror registry for Red Hat OpenShift from a remote host using the upgrade command. Updating to the latest version ensures bug fixes and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. 
You must not use the mirror registry for Red Hat OpenShift user interface (UI). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /USDHOME/quay-install/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a remote host. Procedure To upgrade the mirror registry for Red Hat OpenShift from a remote host, enter the following command: USD ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: USD ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage 2.7. Replacing mirror registry for Red Hat OpenShift SSL/TLS certificates In some cases, you might want to update your SSL/TLS certificates for the mirror registry for Red Hat OpenShift . This is useful in the following scenarios: If you are replacing the current mirror registry for Red Hat OpenShift certificate. If you are using the same certificate as the mirror registry for Red Hat OpenShift installation. If you are periodically updating the mirror registry for Red Hat OpenShift certificate. Use the following procedure to replace mirror registry for Red Hat OpenShift SSL/TLS certificates. Prerequisites You have downloaded the ./mirror-registry binary from the OpenShift console Downloads page. Procedure Enter the following command to install the mirror registry for Red Hat OpenShift : USD ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> This installs the mirror registry for Red Hat OpenShift to the USDHOME/quay-install directory. Prepare a new certificate authority (CA) bundle and generate new ssl.key and ssl.crt key files. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . Assign /USDHOME/quay-install an environment variable, for example, QUAY , by entering the following command: USD export QUAY=/USDHOME/quay-install Copy the new ssl.crt file to the /USDHOME/quay-install directory by entering the following command: USD cp ~/ssl.crt USDQUAY/quay-config Copy the new ssl.key file to the /USDHOME/quay-install directory by entering the following command: USD cp ~/ssl.key USDQUAY/quay-config Restart the quay-app application pod by entering the following command: USD systemctl --user restart quay-app 2.8. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: USD ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt. 
Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 2.9. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. --initPassword The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --no-color , -c Allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. [1] --quayStorage The folder where Quay persistent storage data is saved. Defaults to the quay-storage Podman volume. Root privileges are required to uninstall. --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Defaults to USDHOME/quay-install if left unspecified. --sqliteStorage The folder where SQLite database data is saved. Defaults to sqlite-storage Podman volume if not specified. Root is required to uninstall. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to USDHOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to USDUSER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --version Shows the version for the mirror registry for Red Hat OpenShift . --quayHostname must be modified if the public DNS name of your system is different from the local hostname. Additionally, the --quayHostname flag does not support installation with an IP address. Installation with a hostname is required. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. 2.10. 
Mirror registry for Red Hat OpenShift release notes The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. These release notes track the development of the mirror registry for Red Hat OpenShift in OpenShift Container Platform. 2.10.1. Mirror registry for Red Hat OpenShift 2.0 release notes The following sections provide details for each 2.0 release of the mirror registry for Red Hat OpenShift. 2.10.1.1. Mirror registry for Red Hat OpenShift 2.0.5 Issued: 13 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0298 - mirror registry for Red Hat OpenShift 2.0.5 2.10.1.2. Mirror registry for Red Hat OpenShift 2.0.4 Issued: 06 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0033 - mirror registry for Red Hat OpenShift 2.0.4 2.10.1.3. Mirror registry for Red Hat OpenShift 2.0.3 Issued: 25 November 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:10181 - mirror registry for Red Hat OpenShift 2.0.3 2.10.1.4. Mirror registry for Red Hat OpenShift 2.0.2 Issued: 31 October 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:8370 - mirror registry for Red Hat OpenShift 2.0.2 2.10.1.5. Mirror registry for Red Hat OpenShift 2.0.1 Issued: 26 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:7070 - mirror registry for Red Hat OpenShift 2.0.1 2.10.1.6. Mirror registry for Red Hat OpenShift 2.0.0 Issued: 03 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.0. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:5277 - mirror registry for Red Hat OpenShift 2.0.0 2.10.1.6.1. New features With the release of mirror registry for Red Hat OpenShift , the internal database has been upgraded from PostgreSQL to SQLite. As a result, data is now stored on the sqlite-storage Podman volume by default, and the overall tarball size is reduced by 300 MB. New installations use SQLite by default. Before upgrading to version 2.0, see "Updating mirror registry for Red Hat OpenShift from a local host" or "Updating mirror registry for Red Hat OpenShift from a remote host" depending on your environment. A new feature flag, --sqliteStorage has been added. With this flag, you can manually set the location where SQLite database data is saved. Mirror registry for Red Hat OpenShift is now available on IBM Power and IBM Z architectures ( s390x and ppc64le ). 2.10.2. Mirror registry for Red Hat OpenShift 1.3 release notes To view the mirror registry for Red Hat OpenShift 1.3 release notes, see Mirror registry for Red Hat OpenShift 1.3 release notes . 2.10.3. 
Mirror registry for Red Hat OpenShift 1.2 release notes To view the mirror registry for Red Hat OpenShift 1.2 release notes, see Mirror registry for Red Hat OpenShift 1.2 release notes . 2.10.4. Mirror registry for Red Hat OpenShift 1.1 release notes To view the mirror registry for Red Hat OpenShift 1.1 release notes, see Mirror registry for Red Hat OpenShift 1.1 release notes . 2.11. Troubleshooting mirror registry for Red Hat OpenShift To assist in troubleshooting mirror registry for Red Hat OpenShift , you can gather logs of systemd services installed by the mirror registry. The following services are installed: quay-app.service quay-postgres.service quay-redis.service quay-pod.service Prerequisites You have installed mirror registry for Red Hat OpenShift . Procedure If you installed mirror registry for Red Hat OpenShift with root privileges, you can get the status information of its systemd services by entering the following command: USD sudo systemctl status <service> If you installed mirror registry for Red Hat OpenShift as a standard user, you can get the status information of its systemd services by entering the following command: USD systemctl --user status <service> 2.12. Additional resources Red Hat Quay garbage collection Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring Operator catalogs for use with disconnected clusters
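Supplementing the troubleshooting guidance above, full unit logs can be gathered in addition to unit status. As a hedged example (the specific journalctl invocation and time window shown here are assumptions rather than part of the documented procedure), recent logs for a rootless installation of the quay-app service could be collected with:

USD journalctl --user -u quay-app.service --since "1 hour ago"

For an installation performed with root privileges, drop the --user flag and run the command with sudo.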
[ "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade -v", "sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v", "sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v", "./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage", "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "export QUAY=/USDHOME/quay-install", "cp ~/ssl.crt USDQUAY/quay-config", "cp ~/ssl.key USDQUAY/quay-config", "systemctl --user restart quay-app", "./mirror-registry uninstall -v --quayRoot <example_directory_name>", "sudo systemctl status <service>", "systemctl --user status <service>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/disconnected_installation_mirroring/installing-mirroring-creating-registry
Chapter 10. Worker nodes for single-node OpenShift clusters
Chapter 10. Worker nodes for single-node OpenShift clusters 10.1. Adding worker nodes to single-node OpenShift clusters Single-node OpenShift clusters reduce the host prerequisites for deployment to a single host. This is useful for deployments in constrained environments or at the network edge. However, sometimes you need to add additional capacity to your cluster, for example, in telecommunications and network edge scenarios. In these scenarios, you can add worker nodes to the single-node cluster. Note Unlike multi-node clusters, by default all ingress traffic is routed to the single control-plane node, even after adding additional worker nodes. There are several ways that you can add worker nodes to a single-node cluster. You can add worker nodes to a cluster manually, using Red Hat OpenShift Cluster Manager , or by using the Assisted Installer REST API directly. Important Adding worker nodes does not expand the cluster control plane, and it does not provide high availability to your cluster. For single-node OpenShift clusters, high availability is handled by failing over to another site. When adding worker nodes to single-node OpenShift clusters, a tested maximum of two worker nodes is recommended. Exceeding the recommended number of worker nodes might result in lower overall performance, including cluster failure. Note To add worker nodes, you must have access to the OpenShift Cluster Manager. This method is not supported when using the Agent-based installer to install a cluster in a disconnected environment. 10.1.1. Requirements for installing single-node OpenShift worker nodes To install a single-node OpenShift worker node, you must address the following requirements: Administration host: You must have a computer to prepare the ISO and to monitor the installation. Production-grade server: Installing single-node OpenShift worker nodes requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 10.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 2 vCPU cores 8GB of RAM 100GB Note One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Networking: The worker node server must have access to the internet or access to a local registry if it is not connected to a routable network. The worker node server must have a DHCP reservation or a static IP address and be able to access the single-node OpenShift cluster Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN) for the single-node OpenShift cluster: Table 10.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster. 
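As an illustration only of the records in Table 10.2 (the cluster name sno, the base domain example.com, the node IP address 192.168.1.10, and the use of dnsmasq are assumptions for this sketch, not values required by the procedure), the three entries could be expressed on a dnsmasq-based DNS server as:

address=/api.sno.example.com/192.168.1.10
address=/api-int.sno.example.com/192.168.1.10
address=/apps.sno.example.com/192.168.1.10

Because a dnsmasq address=/domain/ip entry also matches subdomains of the given domain, the last line covers the wildcard *.apps.sno.example.com ingress record.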
Without persistent IP addresses, communications between the apiserver and etcd might fail. Additional resources Minimum resource requirements for cluster installation Recommended practices for scaling the cluster User-provisioned DNS requirements Creating a bootable ISO image on a USB drive Booting from an ISO image served over HTTP using the Redfish API Deleting nodes from a cluster 10.1.2. Adding worker nodes using the Assisted Installer and OpenShift Cluster Manager You can add worker nodes to single-node OpenShift clusters that were created on Red Hat OpenShift Cluster Manager using the Assisted Installer . Important Adding worker nodes to single-node OpenShift clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up. Prerequisites Have access to a single-node OpenShift cluster installed using Assisted Installer . Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Log in to OpenShift Cluster Manager and click the single-node cluster that you want to add a worker node to. Click Add hosts , and download the discovery ISO for the new worker node, adding SSH public key and configuring cluster-wide proxy settings as required. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. After the host is discovered, start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation. When the worker node is successfully installed, it is listed as a worker node in the cluster web console. Important New worker nodes will be encrypted using the same method as the original cluster. Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.3. Adding worker nodes using the Assisted Installer API You can add worker nodes to single-node OpenShift clusters using the Assisted Installer REST API. Before you add worker nodes, you must log in to OpenShift Cluster Manager and authenticate against the API. 10.1.3.1. Authenticating against the Assisted Installer REST API Before you can use the Assisted Installer REST API, you must authenticate against the API using a JSON web token (JWT) that you generate. Prerequisites Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Procedure Log in to OpenShift Cluster Manager and copy your API token. Set the USDOFFLINE_TOKEN variable using the copied API token by running the following command: USD export OFFLINE_TOKEN=<copied_api_token> Set the USDJWT_TOKEN variable using the previously set USDOFFLINE_TOKEN variable: USD export JWT_TOKEN=USD( curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" ) Note The JWT token is valid for 15 minutes only. 
Verification Optional: Check that you can access the API by running the following command: USD curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer USD{JWT_TOKEN}" | jq Example output { "release_tag": "v2.5.1", "versions": { "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175", "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223", "assisted-installer-service": "quay.io/app-sre/assisted-service:ac87f93", "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156" } } 10.1.3.2. Adding worker nodes using the Assisted Installer REST API You can add worker nodes to clusters using the Assisted Installer REST API. Prerequisites Install the OpenShift Cluster Manager CLI ( ocm ). Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Authenticate against the Assisted Installer REST API and generate a JSON web token (JWT) for your session. The generated JWT token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the single-node OpenShift cluster by running the following commands: Set the USDOPENSHIFT_CLUSTER_ID variable. Log in to the cluster and run the following command: USD export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDOPENSHIFT_CLUSTER_ID" '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": USDopenshift_cluster_id, "name": "<openshift_cluster_name>" 2 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, api.compute-1.example.com . 2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 
2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{JWT_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster worker node by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new worker node, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{JWT_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{JWT_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{JWT_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new worker node was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.28.5 compute-1.example.com Ready worker 11m v1.28.5 Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.4. Adding worker nodes to single-node OpenShift clusters manually You can add a worker node to a single-node OpenShift cluster manually by booting the worker node from Red Hat Enterprise Linux CoreOS (RHCOS) ISO and by using the cluster worker.ign file to join the new worker node to the cluster. Prerequisites Install a single-node OpenShift cluster on bare metal. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. Procedure Set the OpenShift Container Platform version: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.15 Set the host architecture: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . 
Get the worker.ign data from the running single-node cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Host the worker.ign file on a web server accessible from your network. Download the OpenShift Container Platform installer and make it available for use by running the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL: USD ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Use the RHCOS ISO and the hosted worker.ign file to install the worker node: Boot the target host with the RHCOS ISO and your preferred method of installation. When the target host has booted from the RHCOS ISO, open a console on the target host. If your local network does not have DHCP enabled, you need to create an ignition file with the new hostname and configure the worker node static IP address before running the RHCOS installation. Perform the following steps: Configure the worker host network connection with a static IP. Run the following command on the target host console: USD nmcli con mod <network_interface> ipv4.method manual \ ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> \ 802-3-ethernet.mtu 9000 where: <static_ip> is the host static IP address and CIDR, for example, 10.1.101.50/24 <network_gateway> is the network gateway, for example, 10.1.101.1 Activate the modified network interface: USD nmcli con up <network_interface> Create a new ignition file new-worker.ign that includes a reference to the original worker.ign and an additional instruction that the coreos-installer program uses to populate the /etc/hostname file on the new worker host. For example: { "ignition":{ "version":"3.2.0", "config":{ "merge":[ { "source":"<hosted_worker_ign_file>" 1 } ] } }, "storage":{ "files":[ { "path":"/etc/hostname", "contents":{ "source":"data:,<new_fqdn>" 2 }, "mode":420, "overwrite":true } ] } } 1 <hosted_worker_ign_file> is the locally accessible URL for the original worker.ign file. For example, http://webserver.example.com/worker.ign 2 <new_fqdn> is the new FQDN that you set for the worker node. For example, new-worker.example.com . Host the new-worker.ign file on a web server accessible from your network. Run the following coreos-installer command, passing in the ignition-url and hard disk details: USD sudo coreos-installer install --copy-network \ --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition where: <new_worker_ign_file> is the locally accessible URL for the hosted new-worker.ign file, for example, http://webserver.example.com/new-worker.ign <hard_disk> is the hard disk where you install RHCOS, for example, /dev/sda For networks that have DHCP enabled, you do not need to set a static IP.
Run the following coreos-installer command from the target host console to install the system: USD coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk> To manually enable DHCP, apply the following NMStateConfig CR to the single-node OpenShift cluster: apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "eth0" macAddress: "AA:BB:CC:DD:EE:11" Important The NMStateConfig CR is required for successful deployments of worker nodes with static IP addresses and for adding a worker node with a dynamic IP address if the single-node OpenShift was deployed with a static IP address. The cluster network DHCP does not automatically set these network settings for the new worker node. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the worker node. When prompted, approve the pending CSRs to complete the installation. When the install is complete, reboot the host. The host joins the cluster as a new worker node. Verification Check that the new worker node was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.28.5 compute-1.example.com Ready worker 11m v1.28.5 Additional resources User-provisioned DNS requirements Approving the certificate signing requests for your machines 10.1.5. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. 
After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests .
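The note above leaves the approval mechanism for kubelet serving CSRs on user-provisioned infrastructure up to you. As a rough sketch only, the loop below polls for pending CSRs and approves them with the same go-template filter used earlier in this section; it deliberately omits the requester and node-identity checks that a production approver must perform, so treat it as a starting point rather than a drop-in solution.
# Assumption: run by a cluster-admin user with oc available on the PATH.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done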
[ "export OFFLINE_TOKEN=<copied_api_token>", "export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq", "{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }", "export API_URL=<api_url> 1", "export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')", "export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')", "CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')", "export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')", "INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')", "curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'", "https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION", "curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'", "2294ba03-c264-4f11-ac08-2f1bb2f8c296", "HOST_ID=<host_id> 1", "curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r", "{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }", "curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'", "{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }", "curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'", "{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.28.5 compute-1.example.com Ready worker 11m v1.28.5", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000", "nmcli con up <network_interface>", "{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }", "sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition", "coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>", "apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.28.5 compute-1.example.com Ready worker 11m v1.28.5", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/nodes/worker-nodes-for-single-node-openshift-clusters
Appendix F. Reference Table for ext4 and XFS Commands
Appendix F. Reference Table for ext4 and XFS Commands XFS replaces ext4 as the default file system in Red Hat Enterprise Linux 7. This table serves as a cross reference listing common file system manipulation tasks and any changes in these commands between ext4 and XFS. Table F.1. Reference Table for ext4 and XFS Commands Task ext4 XFS Creating a file system mkfs.ext4 mkfs.xfs Mounting a file system mount mount Resizing a file system resize2fs xfs_growfs [a] Repairing a file system e2fsck xfs_repair Changing the label on a file system e2label xfs_admin -L Reporting on disk space and file usage quota quota Debugging a file system debugfs xfs_db Saving critical file system metadata to a file e2image xfs_metadump [a] The size of XFS file systems cannot be reduced; the command is used only to increase the size.
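The two growth commands in the table also take different arguments: resize2fs operates on the block device, whereas xfs_growfs operates on the mount point. A hypothetical example, assuming a logical volume /dev/vg0/data that has already been extended and is mounted at /data:
resize2fs /dev/vg0/data   # ext4: grow the file system to fill the extended device
xfs_growfs /data          # XFS: grow the mounted file system to fill the extended device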
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/appe-ext4-to-xfs-command-reference
Chapter 22. Subsystem Control And maintenance
Chapter 22. Subsystem Control And maintenance This chapter provides information on how to control (start, stop, restart, and status check) a Red Hat Certificate System subsystem, as well as general maintenance (health check) recommendations. 22.1. Starting, Stopping, Restarting, and Obtaining Status Red Hat Certificate System subsystem instances can be stopped and started using the systemctl utility on Red Hat Enterprise Linux 8. Note You can also use the pki-server alias to start and stop instances: pki-server <command> <instance> is an alias to systemctl <command> pki-tomcatd@<instance>.service . To start an instance: To stop an instance: To restart an instance: To display the status of an instance: unit_file has one of the following values: pki-tomcat : With watchdog disabled pki-tomcat-nuxwdog : With watchdog enabled
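Taking the alias expansion in the note literally, and assuming an instance named pki-tomcat running without the watchdog, a restart followed by a status check would look roughly like this; adjust the unit file and instance name to match your installation.
systemctl restart pki-tomcatd@pki-tomcat.service
pki-server restart pki-tomcat                      # equivalent shorthand
systemctl status pki-tomcatd@pki-tomcat.service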
[ "systemctl start unit_file @ instance_name .service", "pki-server start instance_name", "systemctl stop unit_file @ instance_name .service", "pki-server stop instance_name", "systemctl restart unit_file @ instance_name .service", "pki-server restart instance_name", "systemctl status unit_file @ instance_name .service" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/subsystem-control-and-maintainance
Chapter 1. Enabling desktop icons
Chapter 1. Enabling desktop icons You can enable the desktop icons functionality and move files to the desktop. 1.1. Desktop icons in RHEL 9 Desktop icons are provided by the Desktop icons GNOME Shell extension, which is available from the gnome-shell-extension-desktop-icons package. Desktop icons in GNOME Classic The GNOME Classic environment includes the gnome-shell-extension-desktop-icons package by default. Desktop icons are always on, and you cannot turn them off. Desktop icons in GNOME Standard In GNOME Standard, desktop icons are disabled by default. To enable desktop icons in the GNOME Standard environment, you must install the gnome-shell-extension-desktop-icons package. 1.2. Enabling desktop icons in GNOME Standard This procedure enables the desktop icons functionality in the GNOME Standard environment. Prerequisites The Extensions application is installed on the system: Procedure Open the Extensions application. Enable the Desktop Icons extension. 1.3. Creating a desktop icon for a file This procedure creates a desktop icon for an existing file. Prerequisites The Desktop icons extension is enabled. Procedure Move the selected file into the ~/Desktop/ directory. Verification Check that the icon for the file appears on the desktop.
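If you prefer the command line to the Extensions application, the gnome-extensions tool shipped with GNOME can toggle the extension as well. The extension UUID can vary between builds, so list it first instead of assuming a name; the grep pattern below is only a guess at how the UUID is spelled.
gnome-extensions list | grep -i desktop-icons
gnome-extensions enable <uuid_from_previous_command>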
[ "dnf install gnome-shell-extension-desktop-icons" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/assembly_enabling-desktop-icons_customizing-the-gnome-desktop-environment
Chapter 3. Upgrading JBoss EAP
Chapter 3. Upgrading JBoss EAP Learn how to upgrade from one JBoss EAP 7 minor release to another. For example, upgrading from JBoss EAP 7.0 to JBoss EAP 7.1. Important If you are migrating from an earlier major release of JBoss EAP, for example, from JBoss EAP 6 to JBoss EAP 7, see the Migration Guide . 3.1. Preparing for an upgrade Before you upgrade JBoss EAP, you need to be aware of the following potential issues: If you back up and restore your configuration files when upgrading to newer releases of JBoss EAP, you might overwrite the new release's configurations. This can disable new features in your upgraded JBoss EAP instance. Compare the old configuration to the new configuration, and only reapply specific configurations you need to keep. You can do this manually, or by creating a script that can apply the changes consistently to multiple server configuration files. If you back up and restore an existing configuration for migration to a newer JBoss EAP release, a server restart updates the configuration files. These configuration files would no longer be compatible with the previous JBoss EAP release. The upgrade might remove temporary folders. Back up any deployments stored in the data/content/ directory prior to starting the upgrade. You can restore the directory content after the upgrade. Otherwise, the new version of the JBoss EAP server does not start because of the missing content. Before applying the upgrade, handle any open transactions and delete the data/tx-object-store/ transaction directory. Check the persistent timer data in the data/timer-service-data directory to determine whether the data applies to the upgrade. Before the upgrade, review the deployment-* files in the data directory to determine active timers. 3.2. Upgrading an archive installer installation You can upgrade JBoss EAP by downloading, decompressing, and installing a new version of a JBoss EAP release. Prerequisites Ensure that the base operating system is up to date. Back up all configuration files, deployments, and user data. Download the compressed file of the target JBoss EAP version. Important For a managed domain, upgrade the JBoss EAP domain controller before you upgrade to a newer release of JBoss EAP. An upgraded JBoss EAP 7 domain controller can manage other JBoss EAP 7 hosts in a managed domain, provided the domain controller runs the same or a more recent version of JBoss EAP than the rest of the managed domain. Procedure Move the downloaded compressed file to any location other than the location of the existing JBoss EAP installation. Note If you want to install the upgraded version of JBoss EAP to the same directory as the existing installation, you will need to move the existing installation to a different location before proceeding. This prevents the loss of modified configuration files, deployments, and upgrades. Extract the compressed file to install a clean instance of the new JBoss EAP release. Copy the EAP_HOME /domain/ and EAP_HOME /standalone/ directories from the previous installation over the new installation directories. Important You must compare and update configuration files from the previous JBoss EAP version with files in the new version of JBoss EAP, because files copied from the previous release might not have features from the new release enabled by default. Review the changes made to the bin directory of the previous installation, and apply the changes to the bin directory of the new release. Warning Do not overwrite the files in the bin directory of the new JBoss EAP release.
You must make changes manually. Review the remaining modified files from the previous installation, and move these changes into the new installation. These files might include: The welcome-content directory. Custom modules in the modules directory. Optional: If JBoss EAP was previously configured to run as a service, remove the existing service and configure a new service for the new installation. 3.3. Upgrading an RPM installation Before upgrading your current JBoss EAP instance with a new JBoss EAP instance by using the RPM installation method, check that your system meets certain setup prerequisites. Prerequisites The base operating system is up to date, and you get updates from the standard Red Hat Enterprise Linux repositories. You are subscribed to the relevant JBoss EAP repository for the upgrade. If you are subscribed to a minor JBoss EAP repository, you have changed to the latest minor repository to get the upgrade. Important For a managed domain, upgrade the JBoss EAP domain controller before you upgrade to a newer release of JBoss EAP. An upgraded JBoss EAP 7 domain controller can still manage other JBoss EAP 7 hosts in a managed domain, as long as the domain controller is running the same or a more recent version than the rest of the domain. Procedure Upgrade your current JBoss EAP version to the newer JBoss EAP version by issuing the following command in your terminal: Enable new features in the upgraded release, such as new subsystems, by manually merging each .rpmnew file into your existing configuration files. The RPM upgrade process does not replace any of your modified JBoss EAP configuration files, but it creates .rpmnew files based on the default configuration of your upgraded JBoss EAP instance. Additional resources For more information on JBoss EAP repositories, see the information on choosing a JBoss EAP repository and changing JBoss EAP repositories in the Installation Guide . 3.4. Upgrading a cluster JBoss EAP does not support the creation of clusters where the nodes include different versions of JBoss EAP servers. All nodes within a cluster must run the same JBoss EAP version. Procedure Create a new JBoss EAP cluster that consists of nodes running the newest JBoss EAP version. Migrate all clustered traffic from your existing JBoss EAP cluster to the new cluster on your upgraded JBoss EAP release. Shut down the cluster running the older JBoss EAP version and then remove its content. Additional resources For information about creating a new cluster, see configuring high availability clusters in the Configuration Guide . For information about how to migrate traffic from an old cluster to a new one, see migrating traffic between clusters .
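As an illustration of the archive upgrade flow described in Section 3.2, the following commands sketch a 7.3-to-7.4 upgrade; the version numbers, archive name, and /opt paths are assumptions, and the copied configuration files still need to be compared and merged by hand afterwards.
cp -a /opt/jboss-eap-7.3 /opt/jboss-eap-7.3.backup                     # keep the old installation intact
unzip -q jboss-eap-7.4.0.zip -d /opt                                   # extract the new release alongside it
cp -a /opt/jboss-eap-7.3/standalone/. /opt/jboss-eap-7.4/standalone/   # carry over standalone configuration and deployments
cp -a /opt/jboss-eap-7.3/domain/. /opt/jboss-eap-7.4/domain/           # carry over managed domain configuration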
[ "yum update" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/patching_and_upgrading_guide/assembly-upgrading-jboss-eap_default
8.7. About Replication Guarantees
8.7. About Replication Guarantees In a clustered cache, the user can receive synchronous replication guarantees as well as the parallelism associated with asynchronous replication. Red Hat JBoss Data Grid provides an asynchronous API for this purpose. The asynchronous methods used in the API return Futures, which can be queried. The queries block the thread until a confirmation is received about the success of any network calls used.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/about_replication_guarantees
Chapter 2. Creating an application by using the GitOps CLI
Chapter 2. Creating an application by using the GitOps CLI With Argo CD, you can create your applications on an OpenShift Container Platform cluster by using the GitOps argocd CLI. 2.1. Creating an application in the default mode by using the GitOps CLI You can create applications in the default mode by using the GitOps argocd CLI. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI. You have logged in to the Argo CD instance. Procedure Get the admin account password for the Argo CD server: USD ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d) Get the Argo CD server URL: USD SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}') Log in to the Argo CD server by using the admin account password and enclosing it in single quotes: Important Enclosing the password in single quotes ensures that special characters, such as USD , are not misinterpreted by the shell. Always use single quotes to enclose the literal value of the password. USD argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL} Example USD argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing Verify that you are able to run argocd commands in the default mode by listing all applications: USD argocd app list If the configuration is correct, then existing applications will be listed with the following header: Sample output NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET Create an application in the default mode: USD argocd app create app-spring-petclinic \ --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \ --path app \ --revision main \ --dest-server https://kubernetes.default.svc \ --dest-namespace spring-petclinic \ --directory-recurse \ --sync-policy automated \ --self-heal \ --sync-option Prune=true \ --sync-option CreateNamespace=true Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance: USD oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops" List the available applications to confirm that the application is created successfully and repeat the command until the application has the Healthy and Synced statuses: USD argocd app list 2.2. Creating an application in core mode by using the GitOps CLI You can create applications in core mode by using the GitOps argocd CLI. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift GitOps argocd CLI.
Procedure Log in to the OpenShift Container Platform cluster by using the oc CLI tool: USD oc login -u <username> -p <password> <server_url> Example USD oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443 Check whether the context is set correctly in the kubeconfig file: USD oc config current-context Set the default namespace of the current context to openshift-gitops : USD oc config set-context --current --namespace openshift-gitops Set the following environment variable to override the Argo CD component names: USD export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server Verify that you are able to run argocd commands in core mode by listing all applications: USD argocd app list --core If the configuration is correct, then existing applications will be listed with the following header: Sample output NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET Create an application in core mode: USD argocd app create app-spring-petclinic --core \ --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git \ --path app \ --revision main \ --dest-server https://kubernetes.default.svc \ --dest-namespace spring-petclinic \ --directory-recurse \ --sync-policy automated \ --self-heal \ --sync-option Prune=true \ --sync-option CreateNamespace=true Label the spring-petclinic destination namespace to be managed by the openshift-gitops Argo CD instance: USD oc label ns spring-petclinic "argocd.argoproj.io/managed-by=openshift-gitops" List the available applications to confirm that the application is created successfully and repeat the command until the application has the Healthy and Synced statuses: USD argocd app list --core 2.3. Additional resources Installing the GitOps CLI Basic GitOps argocd commands
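Instead of re-running argocd app list by hand, newer argocd releases can block until the application reaches the desired state. The flags below assume a CLI version that supports them, and the timeout value is arbitrary.
argocd app wait app-spring-petclinic --core --sync --health --timeout 300
argocd app get app-spring-petclinic --core    # print the application details once the wait returns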
[ "ADMIN_PASSWD=USD(oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\\.password}' | base64 -d)", "SERVER_URL=USD(oc get routes openshift-gitops-server -n openshift-gitops -o jsonpath='{.status.ingress[0].host}')", "argocd login --username admin --password USD{ADMIN_PASSWD} USD{SERVER_URL}", "argocd login --username admin --password '<password>' openshift-gitops.openshift-gitops.apps-crc.testing", "argocd app list", "NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET", "argocd app create app-spring-petclinic --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git --path app --revision main --dest-server https://kubernetes.default.svc --dest-namespace spring-petclinic --directory-recurse --sync-policy automated --self-heal --sync-option Prune=true --sync-option CreateNamespace=true", "oc label ns spring-petclinic \"argocd.argoproj.io/managed-by=openshift-gitops\"", "argocd app list", "oc login -u <username> -p <password> <server_url>", "oc login -u kubeadmin -p '<password>' https://api.crc.testing:6443", "oc config current-context", "oc config set-context --current --namespace openshift-gitops", "export ARGOCD_REPO_SERVER_NAME=openshift-gitops-repo-server", "argocd app list --core", "NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET", "argocd app create app-spring-petclinic --core --repo https://github.com/redhat-developer/openshift-gitops-getting-started.git --path app --revision main --dest-server https://kubernetes.default.svc --dest-namespace spring-petclinic --directory-recurse --sync-policy automated --self-heal --sync-option Prune=true --sync-option CreateNamespace=true", "oc label ns spring-petclinic \"argocd.argoproj.io/managed-by=openshift-gitops\"", "argocd app list --core" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/argo_cd_applications/creating-an-application-using-gitops-argocd-cli
Chapter 2. Administering hosts
Chapter 2. Administering hosts This chapter describes creating, registering, administering, and removing hosts. 2.1. Creating a host in Red Hat Satellite Use this procedure to create a host in Red Hat Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . On the Host tab, enter the required details. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove. On the Puppet Classes tab, select the Puppet classes you want to include. On the Interfaces tab: For each interface, click Edit in the Actions column and configure the following settings as required: Type - For a Bond or BMC interface, use the Type list and select the interface type. MAC address - Enter the MAC address. DNS name - Enter the DNS name that is known to the DNS server. This is used for the host part of the FQDN. Domain - Select the domain name of the provisioning network. This automatically updates the Subnet list with a selection of suitable subnets. IPv4 Subnet - Select an IPv4 subnet for the host from the list. IPv6 Subnet - Select an IPv6 subnet for the host from the list. IPv4 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. The address can be omitted if provisioning tokens are enabled, if the domain does not manage DNS, if the subnet does not manage reverse DNS, or if the subnet does not manage DHCP reservations. IPv6 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. Managed - Select this checkbox to configure the interface during provisioning to use the Capsule provided DHCP and DNS services. Primary - Select this checkbox to use the DNS name from this interface as the host portion of the FQDN. Provision - Select this checkbox to use this interface for provisioning. This means TFTP boot will take place using this interface, or in case of image based provisioning, the script to complete the provisioning will be executed through this interface. Note that many provisioning tasks, such as downloading packages by anaconda or Puppet setup in a %post script, will use the primary interface. Virtual NIC - Select this checkbox if this interface is not a physical device. This setting has two options: Tag - Optionally set a VLAN tag. If unset, the tag will be the VLAN ID of the subnet. Attached to - Enter the device name of the interface this virtual interface is attached to. Click OK to save the interface configuration. Optionally, click Add Interface to include an additional network interface. For more information, see Chapter 5, Adding network interfaces . Click Submit to apply the changes and exit. On the Operating System tab, enter the required details. For Red Hat operating systems, select Synced Content for Media Selection . If you want to use non Red Hat operating systems, select All Media , then select the installation media from the Media Selection list. You can select a partition table from the list or enter a custom partition table in the Custom partition table field. You cannot specify both. On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. 
This includes all Puppet Class, Ansible Playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter . When you create a host, you can set system purpose attributes. System purpose attributes help determine which repositories are available on the host. System purpose attributes also help with reporting in the Subscriptions service of the Red Hat Hybrid Cloud Console. In the Host Parameters area, enter the following parameter names with the corresponding values. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . syspurpose_role syspurpose_sla syspurpose_usage syspurpose_addons If you want to create a host with pull mode for remote job execution, add the enable-remote-execution-pull parameter with type boolean set to true . For more information, see Section 13.4, "Transport modes for remote execution" . On the Additional Information tab, enter additional information about the host. Click Submit to complete your provisioning request. CLI procedure To create a host associated to a host group, enter the following command: This command prompts you to specify the root password. It is required to specify the host's IP and MAC address. Other properties of the primary network interface can be inherited from the host group or set using the --subnet , and --domain parameters. You can set additional interfaces using the --interface option, which accepts a list of key-value pairs. For the list of available interface settings, enter the hammer host create --help command. 2.2. Cloning hosts You can clone existing hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . In the Actions menu, click Clone . On the Host tab, ensure to provide a Name different from the original host. On the Interfaces tab, ensure to provide a different IP address. Click Submit to clone the host. For more information, see Section 2.1, "Creating a host in Red Hat Satellite" . 2.3. Associating a virtual machine with Satellite from a hypervisor Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select a compute resource. On the Virtual Machines tab, click Associate VMs to associate all VMs or select Associate VM from the Actions menu to associate a single VM. 2.4. Editing the system purpose of a host You can edit the system purpose attributes for a Red Hat Enterprise Linux host. System purpose allows you to set the intended use of a system on your network and improves reporting accuracy in the Subscriptions service of the Red Hat Hybrid Cloud Console. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Prerequisites The host that you want to edit must be registered with the subscription-manager. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Overview tab, click Edit on the System purpose card. Select the system purpose attributes for your host. Click Save . CLI procedure Log in to the host and edit the required system purpose attributes. For example, set the usage type to Production , the role to Red Hat Enterprise Linux Server , and add the addon add on. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . 
Verify the system purpose attributes for this host: 2.5. Editing the system purpose of multiple hosts You can edit the system purpose attributes of Red Hat Enterprise Linux hosts. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Prerequisites The hosts that you want to edit must be registered with subscription-manager. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the Red Hat Enterprise Linux 8 hosts that you want to edit. Click the Select Action list and select Manage System Purpose . Select the system purpose attributes that you want to assign to the selected hosts. You can select one of the following values: A specific attribute to set on all selected hosts. No Change to keep the attribute set on the selected hosts. None (Clear) to clear the attribute on the selected hosts. Click Assign . 2.6. Changing a module stream for a host If you have a host running Red Hat Enterprise Linux 8, you can modify the module stream for the repositories you install. You can enable, disable, install, update, and remove module streams from your host in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the Content tab, then click the Module streams tab. Click the vertical ellipsis next to the module and select the action you want to perform. You get a REX job notification once the remote execution job is complete. 2.7. Enabling custom repositories on content hosts You can enable all custom repositories on content hosts using the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select a host. Select the Content tab, then select Repository sets . From the dropdown, you can filter the Repository type column to Custom . Select the repositories that you want to enable, or click the Select All checkbox to select all repositories, then click the vertical ellipsis and select Override to Enabled . 2.8. Changing the content source of a host A content source is a Capsule that a host consumes content from. Use this procedure to change the content source for a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the vertical ellipsis icon next to the Edit button and select Change content source . Select the Content Source , Lifecycle Environment , and Content View from the lists. Click Change content source . Note Some lifecycle environments can be unavailable for selection if they are not synced on the selected content source. For more information, see Adding lifecycle environments to Capsule Servers in Managing content . You can complete the content source change either by using remote execution or manually. To update the configuration on the host using remote execution, click Run job invocation . For more information about running remote execution jobs, see Configuring and setting up remote jobs in Managing hosts . To update the content source manually, execute the autogenerated commands from Change content source on the host. 2.9. Changing the environment of a host Use this procedure to change the environment of a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Content view environment card, click the options icon and select Edit content view environments . Select the environment. Select the content view. Click Save . 2.10. 
Changing the managed status of a host Hosts provisioned by Satellite are Managed by default. When a host is set to Managed, you can configure additional host parameters from Satellite Server. These additional parameters are listed on the Operating System tab. If you change any settings on the Operating System tab, they will not take effect until you set the host to build and reboot it. If you need to obtain reports about configuration management on systems using an operating system not supported by Satellite, set the host to Unmanaged. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Click Manage host or Unmanage host to change the host's status. Click Submit . 2.11. Enabling Tracer on a host Use this procedure to enable Tracer on Satellite and access Traces. Tracer displays a list of services and applications that need to be restarted. Traces is the output generated by Tracer in the Satellite web UI. Prerequisites Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Remote execution is enabled. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Traces tab, click Enable Traces . Select the provider to install katello-host-tools-tracer from the list. Click Enable Tracer . You get a REX job notification after the remote execution job is complete. 2.12. Restarting applications on a host Use this procedure to restart applications from the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the hosts you want to modify. Select the Traces tab. Select applications that you want to restart. Select Restart via remote execution from the Restart app list. You will get a REX job notification once the remote execution job is complete. 2.13. Assigning a host to a specific organization Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in Administering Red Hat Satellite . Note If your host is already registered with a different organization, you must first unregister the host before assigning it to a new organization. To unregister the host, run subscription-manager unregister on the host. After you assign the host to a new organization, you can re-register the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host you want to change. From the Select Action list, select Assign Organization . A new option window opens. From the Select Organization list, select the organization that you want to assign your host to. Select the checkbox Fix Organization on Mismatch . Note A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the organization you want to assign the host to. The option Fix Organization on Mismatch will add such a resource to the organization, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one organization to another will fail, even if there is no actual mismatch in settings. Click Submit . 2.14. 
Assigning a host to a specific location Use this procedure to assign a host to a specific location. For general information about locations and how to configure them, see Creating a Location in Managing content . Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host you want to change. From the Select Action list, select Assign Location . A new option window opens. Navigate to the Select Location list and choose the location that you want for your host. Select the checkbox Fix Location on Mismatch . Note A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the location you want to assign the host to. The option Fix Location on Mismatch will add such a resource to the location, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one location to another will fail, even if there is no actual mismatch in settings. Click Submit . 2.15. Switching between hosts When you are on a particular host in the Satellite web UI, you can navigate between hosts without leaving the page by using the host switcher. Click ⇄ to the hostname. This displays a list of hosts in alphabetical order with a pagination arrow and a search bar to find the host you are looking for. 2.16. Viewing host details from a content host Use this procedure to view the host details page from a content host. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts Click the content host you want to view. Select the Details tab to see the host details page. The cards in the Details tab show details for the System properties , BIOS , Networking interfaces , Operating system , Provisioning templates , and Provisioning . Registered content hosts show additional cards for Registration details , Installed products , and HW properties providing information about Model , Number of CPU(s) , Sockets , Cores per socket , and RAM . 2.17. Selecting host columns You can select what columns you want to see in the host table on the Hosts > All Hosts page. Note It is not possible to deselect the Name column. The Name column serves as a primary identification method of the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select columns that you want to display. You can select individual columns or column categories. Selecting or deselecting a category selects or deselects all columns in that category. Note Some columns are included in more than one category, but you can display a column of a specific type only once. By selecting or deselecting a specific column, you select or deselect all instances of that column. Verification You can now see the selected columns in the host table. 2.18. Removing a host from Satellite Use this procedure to remove a host from Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts or Hosts > Content Hosts . Note that there is no difference from what page you remove a host, from All Hosts or Content Hosts . In both cases, Satellite removes a host completely. Select the hosts that you want to remove. From the Select Action list, select Delete Hosts . Click Submit to remove the host from Satellite permanently. Warning By default, the Destroy associated VM on host delete setting is set to no . 
If a host record that is associated with a virtual machine is deleted, the virtual machine will remain on the compute resource. To delete a virtual machine on the compute resource, navigate to Administer > Settings and select the Provisioning tab. Setting Destroy associated VM on host delete to yes deletes the virtual machine if the host record that is associated with the virtual machine is deleted. To avoid deleting the virtual machine in this situation, disassociate the virtual machine from Satellite without removing it from the compute resource or change the setting. CLI procedure Delete your host from Satellite: Alternatively, you can use --name My_Host_Name instead of --id My_Host_ID . 2.18.1. Disassociating a virtual machine from Satellite without removing it from a hypervisor Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox to the left of the hosts that you want to disassociate. From the Select Action list, click Disassociate Hosts . Optional: Select the checkbox to keep the hosts for future action. Click Submit . 2.19. Lifecycle status of RHEL hosts Satellite provides multiple mechanisms to display information about upcoming End of Support (EOS) events for your Red Hat Enterprise Linux hosts: Notification banner A column on the Hosts index page Alert on the Hosts index page for each host that runs Red Hat Enterprise Linux with an upcoming EOS event in a year as well as when support has ended Ability to Search for hosts by EOS on the Hosts index page Host status card on the host details page For any hosts that are not running Red Hat Enterprise Linux, Satellite displays Unknown in the RHEL Lifecycle status and Last report columns. EOS notification banner When either the end of maintenance support or the end of extended lifecycle support approaches in a year, you will see a notification banner in the Satellite web UI if you have hosts with that Red Hat Enterprise Linux version. The notification provides information about the Red Hat Enterprise Linux version, the number of hosts running that version in your environment, the lifecycle support, and the expiration date. Along with other information, the Red Hat Enterprise Linux lifecycle column is visible in the notification. 2.19.1. Displaying RHEL lifecycle status You can display the status of the end of support (EOS) for your Red Hat Enterprise Linux hosts in the table on the Hosts index page. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select the Content column to expand it. Select RHEL Lifecycle status . Click Save to generate a new column that displays the Red Hat Enterprise Linux lifecycle status. 2.19.2. Host search by RHEL lifecycle status You can use the Search field to search hosts by rhel_lifecycle_status . It can have one of the following values: full_support maintenance_support approaching_end_of_maintenance extended_support approaching_end_of_support support_ended
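As an illustration of the search capability described above, the following sketch shows how a lifecycle query might look. The rhel_lifecycle_status field and its values are taken from the list in this section; the hammer command form is an assumption and may differ between Satellite releases, so treat it as a sketch rather than a definitive invocation.
# Hedged sketch: list hosts whose RHEL maintenance support is approaching its end
hammer host list --search "rhel_lifecycle_status = approaching_end_of_maintenance"
# The same query syntax can be entered in the Search field on the Hosts index page,
# for example to find hosts whose support has already ended:
#   rhel_lifecycle_status = support_ended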
[ "hammer host create --ask-root-password yes --hostgroup \" My_Host_Group \" --interface=\"primary=true, provision=true, mac= My_MAC_Address , ip= My_IP_Address \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \"", "subscription-manager syspurpose set usage ' Production ' subscription-manager syspurpose set role ' Red Hat Enterprise Linux Server ' subscription-manager syspurpose add addons ' your_addon '", "subscription-manager syspurpose", "hammer host delete --id My_Host_ID --location-id My_Location_ID --organization-id My_Organization_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/Administering_Hosts_managing-hosts
Chapter 27. Configuring cluster quorum
Chapter 27. Configuring cluster quorum A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into a subset of cluster nodes, the results will be unpredictable. For information about the configuration and operation of the votequorum service, see the votequorum (5) man page. 27.1. Configuring quorum options There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. The following table summarizes these options. Table 27.1. Quorum Options Option Description auto_tie_breaker When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate. The auto_tie_breaker option is principally used for clusters with an even number of nodes, as it allows the cluster to continue operation with an even split. For more complex failures, such as multiple, uneven splits, it is recommended that you use a quorum device. The auto_tie_breaker option is incompatible with quorum devices. wait_for_all When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time. The wait_for_all option is primarily used for two-node clusters and for even-node clusters using the quorum device lms (last man standing) algorithm. The wait_for_all option is automatically enabled when a cluster has two nodes, does not use a quorum device, and auto_tie_breaker is disabled. You can override this by explicitly setting wait_for_all to 0. last_man_standing When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all when you enable this option. The last_man_standing option is incompatible with quorum devices. last_man_standing_window The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes. For further information about configuring and using these options, see the votequorum (5) man page. 27.2. Modifying quorum options You can modify general quorum options for your cluster with the pcs quorum update command. Executing this command requires that the cluster be stopped. For information on the quorum options, see the votequorum (5) man page. The format of the pcs quorum update command is as follows. The following series of commands modifies the wait_for_all quorum option and displays the updated status of the option. Note that the system does not allow you to execute this command while the cluster is running. 27.3. Displaying quorum configuration and status Once a cluster is running, you can enter the following cluster quorum commands to display the quorum configuration and status. The following command shows the quorum configuration. The following command shows the quorum runtime status. 27.4. 
Running inquorate clusters If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum. Warning Changing the expected votes in a live cluster should be done with extreme caution. If less than 50% of the cluster is running because you have manually changed the expected votes, then the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled. The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload. In a situation in which you know that the cluster is inquorate but you want the cluster to proceed with resource management, you can use the pcs quorum unblock command to prevent the cluster from waiting for all nodes when establishing quorum. Note This command should be used with extreme caution. Before issuing this command, it is imperative that you ensure that nodes that are not currently in the cluster are switched off and have no access to shared resources.
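The quorum options described in Table 27.1 are supplied when the cluster is created. The following sketch assumes the pcs cluster setup syntax that accepts quorum options after the node list; the exact form can vary between pcs versions, and the node names are placeholders.
# Hedged sketch: create a four-node cluster with auto_tie_breaker and wait_for_all enabled
pcs cluster setup my_cluster \
  node1.example.com node2.example.com node3.example.com node4.example.com \
  quorum auto_tie_breaker=1 wait_for_all=1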
[ "pcs quorum update [auto_tie_breaker=[0|1]] [last_man_standing=[0|1]] [last_man_standing_window=[ time-in-ms ] [wait_for_all=[0|1]]", "pcs quorum update wait_for_all=1 Checking corosync is not running on nodes Error: node1: corosync is running Error: node2: corosync is running pcs cluster stop --all node2: Stopping Cluster (pacemaker) node1: Stopping Cluster (pacemaker) node1: Stopping Cluster (corosync) node2: Stopping Cluster (corosync) pcs quorum update wait_for_all=1 Checking corosync is not running on nodes node2: corosync is not running node1: corosync is not running Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded pcs quorum config Options: wait_for_all: 1", "pcs quorum [config]", "pcs quorum status", "pcs quorum expected-votes votes", "pcs quorum unblock" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_configuring-cluster-quorum-configuring-and-managing-high-availability-clusters
8.147. perl-Net-DNS
8.147. perl-Net-DNS 8.147.1. RHBA-2013:0785 - perl-Net-DNS bug fix update Updated perl-Net-DNS packages that fix one bug are now available for Red Hat Enterprise Linux 6. The perl-Net-DNS packages provide Net::DNS, a DNS resolver implementation for Perl, the high-level programming language commonly used for system administration utilities and web programming. Bug Fix BZ#766357 Previously, a dynamic update of an AAAA record caused the Net::DNS module to return a FORMERR error, because the prerequisite for the AAAA record created an RDATA entry even when no address was specified. Consequently, removing an AAAA record from a DNS zone failed. This update adds a check to ensure that the required data are defined, and removing AAAA records now works as expected. Users of perl-Net-DNS are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/perl-net-dns
Chapter 28. Create a product
Chapter 28. Create a product The product listing provides marketing and technical information, showcasing your product's features and advantages to potential customers. It lays the foundation for adding all necessary components to your product for certification. Prerequisites Verify the functionality of your product on the target Red Hat platform, in addition to the specific certification testing requirements. If running your product on the targeted Red Hat platform results in a substandard experience then you must resolve the issues before certification. Certify your chart's container images as a container application before creating a Helm chart component. Procedure Red Hat recommends completing all optional fields in the listing tabs for a comprehensive product listing. More information helps mutual customers make informed choices. Red Hat encourages collaboration with your product manager, marketing representative, or other product experts when entering information for your product listing. Fields marked with an asterisk (*) are mandatory. Procedure Log in to the Red Hat Partner Connect Portal . Go to the Certified technology portal tab and click Visit the portal . On the header bar, click Product management . From the Listing and certification tab click Manage products . From the My Products page, click Create Product . A Create New Product dialog opens. Enter the Product name . From the What kind of product would you like to certify? drop-down, select the required product category and click Create product . For example, select Containerized Application for creating a containerized product listing. A new page with your Product name opens. It comprises the following tabs: Section 28.1, "Overview for Helm charts" Section 28.2, "Product Information for Helm charts" Section 28.3, "Components for Helm charts" Section 28.4, "Support for Helm charts" Along with the following tabs, the page header provides the Product Score details. Product Score evaluates your product information and displays a score. It can be: Fair Good Excellent Best Click How do I improve my score? to improve your product score. After providing the product listing details, click Save before moving to the section. 28.1. Overview for Helm charts This tab consists of a series of tasks that you must complete to publish your product: Section 28.1.1, "Complete product listing details for Helm charts" Section 28.1.2, "Complete company profile information for Helm charts" Section 28.1.3, "Accept legal agreements for Helm charts" Section 28.1.4, "Certify or validate your Helm charts" Section 28.1.5, "Validate your Helm charts" Section 28.1.6, "Add at least one product component for Helm charts" Section 28.1.7, "Certify components for your listing for Helm charts" 28.1.1. Complete product listing details for Helm charts To complete your product listing details, click Start . The Product Information tab opens. Enter all the essential product details and click Save . 28.1.2. Complete company profile information for Helm charts To complete your company profile information, click Start . After entering all the details, click Submit . To modify the existing details, click Review . The Account Details page opens. Review and modify the Company profile information and click Submit . 28.1.3. Accept legal agreements for Helm charts To publish your product image, agree to the terms regarding the distribution of partner container images. To accept the legal agreements, click Start . To preview or download the agreement, click Review . 
The Red Hat Partner Connect Container Appendix document displays. Read the document to know the terms related to the distribution of container images. 28.1.4. Certify or validate your Helm charts It is not possible to validate a product that already has a certified component. Certifying a component is not required in order to validate a product. To select validation or certification for your product, click Validate or Certify product . Read the Publication and testing guidelines . To certify, click Add Component and then go to Section 28.1.6, "Add at least one product component for Helm charts" . To validate, click Start validation . After submitting your Helm chart for validation, the Red Hat certification team will review and verify the entered details of the Partner validation questionnaire. If at a later date you want to certify your Partner Validated Helm chart, complete the certification details. 28.1.5. Validate your Helm charts Select What Red Hat products are you validating for? Red Hat Open Shift or Red Hat Enterprise Linux. Select which Red Hat Open Shift or Red Hat Enterprise Linux versions and subversions you want to validate your products for. Click, Start Validation . Enter and complete all the information requested in the Partner validation questionnaire , including documentation, product testing and which Red Hat Open Shift or Red Hat cluster it has been tested on. The entered details in the questionnaire will be used by Red Hat to determine whether to validate the product and if it can be published. 28.1.6. Add at least one product component for Helm charts Click Start . You are redirected to the Components tab. To add a new or existing product component, click Add component . For adding a new component, In the Component Name text box, enter the component name. For What kind of standalone component are you creating? select the component that you wish to certify. For example, for certifying your Helm Charts, select Helm Chart . Click . In the Chart Name text box, enter a unique name for your chart. Distribution Method - Select one of the following options for publishing your Helm Chart: Helm chart repository charts.openshift.io - The Helm chart is published to the Red Hat Helm chart repository, charts.openshift.io and the users can pull your chart from this repository. Note When you select the checkbox The certified helm chart will be distributed from my company's repository , an entry about the location of your chart is added to the index of Red Hat Helm chart repository, charts.openshift.io . Web catalog only (catalog.redhat.com) - The Helm chart is not published to the Red Hat Helm chart repository, charts.openshift.io and is not visible on Red Hat OpenShift OperatorHub. This is the default option when you create a new component and this option is suitable for partners who do not want their Helm chart publicly installable within OpenShift, but require a proof of certification. Select this option only if you have a distribution, entitlement, or other business requirements that is not otherwise accommodated within the OpenShift In-product Catalog (Certified) option. Click Add component . For adding an existing component, from the Add Component dialog, select Existing Component . From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . 28.1.7. 
Certify components for your listing for Helm charts To certify the components for your listing, click Start . If you have existing product components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the components Select the components for certification. After completing all the above tasks you will see a green tick mark corresponding to all the options. The Overview tab also provides the following information: Product contacts - Provides Product marketing and Technical contact information. Click Add contacts to product to provide the contact information Click Edit to update the information. Components in product - Provides the list of the components attached to the product along with their last updated information. Click Add components to product to add new or existing components to your product. Click Edit components to update the existing component information. After publishing the product listing, you can view your Product Readiness Score and Ways to raise your score on the Overview tab. Additional resources For more information about the distribution methods, see Helm Chart Distribution methods . 28.2. Product Information for Helm charts Through this tab you can provide all the essential information about your product. The product details are published along with your product on the Red Hat Ecosystem catalog. General tab: Provide basic details of the product, including product name and description. Enter the Product Name . Optional: Upload the Product Logo according to the defined guidelines. Enter a Brief description and a Long description . Click Save . Features & Benefits tab: Provide important features of your product. Optional: Enter the Title and Description . Optional: To add additional features for your product, click + Add new feature . Click Save . Quick start & Config tab: Add links to any quick start guide or configuration document to help customers deploy and start using your product. Optional: Enter Quick start & configuration instructions . Click Save . Select Hide default instructions check box, if you don't want to display them. Linked resources tab: Add links to supporting documentation to help our customers use your product. The information is mapped to and is displayed in the Documentation section on the product's catalog page. Note It is mandatory to add a minimum of three resources. Red Hat encourages you to add more resources, if available. Select the Type drop-down menu, and enter the Title and Description of the resource. Enter the Resource URL . Optional: To add additional resources for your product, click + Add new Resource . Click Save . FAQs tab: Add frequently asked questions and answers of the product's purpose, operation, installation, or other attribute details. You can include common customer queries about your product and services. Enter Question and Answer . Optional: To add additional FAQs for your product, click + Add new FAQ . Click Save . Support tab: This tab lets you provide contact information of your Support team. Enter the Support description , Support web site , Support phone number , and Support email address . Click Save . Contacts tab: Provide contact information of your marketing and technical team. Enter the Marketing contact email address and Technical contact email address . Optional: To add additional contacts, click + Add another . Click Save . Legal tab: Provide the product related license and policy information. 
Enter the License Agreement URL for the product and Privacy Policy URL . Click Save . SEO tab: Use this tab to improve the discoverability of your product for our mutual customers, enhancing visibility both within the Red Hat Ecosystem Catalog search and on internet search engines. Providing a higher number of search aliases (key and value pairs) will increase the discoverability of your product. Select the Product Category . Enter the Key and Value to set up Search aliases. Click Save . Optional: To add additional key-value pair, click + Add new key-value pair . Note Add at least one Search alias for your product. Red Hat encourages you to add more aliases, if available. 28.3. Components for Helm charts Use this tab to add components to your product listing. Through this tab you can also view a list of attached components linked to your Product Listing. Alternatively, to attach a component to the Product Listing, you can complete the Add at least one product component option available in the Overview tab of a Container, Operator, or Helm Chart product listing. To add a new or existing product component, click Add component . For adding a new component, In the Component Name text box, enter the component name. For What kind of OpenShift component are you creating? select the component that you wish to certify. For example, for certifying your Helm Charts, select Helm Chart . Click . In the Chart Name text box, enter a unique name for your chart. Distribution Method - Select one of the following options for publishing your Helm Chart: Helm chart repository charts.openshift.io - The Helm chart is published to the Red Hat Helm chart repository, charts.openshift.io and the users can pull your chart from this repository. Note When you select the checkbox The certified helm chart will be distributed from my company's repository , an entry about the location of your chart is added to the index of Red Hat Helm chart repository, charts.openshift.io . Web catalog only (catalog.redhat.com) - The Helm chart is not published to the Red Hat Helm chart repository, charts.openshift.io and is not visible in the Red Hat OpenShift Developer Console. This is the default option when you create a new component and this option is suitable for partners who do not want their Helm chart publicly installable within OpenShift, but require a proof of certification. Select this option only if you have a distribution, entitlement, or other business requirements that is not otherwise accommodated within the OpenShift In-product Catalog (Certified) option. Click Add component . For adding an existing component, from the Add Component dialog, select Existing Component . From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . Note You can add the same component to multiple products listings. All attached components must be published before the product listing can be published. After attaching components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the attached components Alternatively, to search for specific components, type the component's name in the Search by component Name text box. 28.4. 
Support for Helm charts The Red Hat Partner Acceleration Desk (PAD) is a Products and Technologies level partner help desk service that allows the current and prospective partners a central location to ask non-technical questions pertaining to Red Hat offerings, partner programs, product certification, engagement process, and so on. You can also contact the Red Hat Partner Acceleration Desk for any technical questions you may have regarding the Certification. Technical help requests will be redirected to the Certification Operations team. Through the Partner Subscriptions program, Red Hat offers free, not-for-resale software subscriptions that you can use to validate your product on the target Red Hat platform. To request access to the program, follow the instructions on the Partner Subscriptions site. To request support, click Open a support case. See PAD - How to open & manage PAD cases , to open a PAD ticket. To view the list of existing support cases, click View support cases . 28.5. Removing a product After creating a product listing if you wish to remove it, go to the Overview tab and click Delete . A published product must first be unpublished before it can be deleted. Red Hat retains information related to deleted products even after you delete the product.
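As additional context for the Helm chart repository distribution method described earlier in this chapter, the following sketch shows how an end user could consume a chart after it is published to charts.openshift.io. The repository alias and the my-partner-chart name are hypothetical placeholders; only the charts.openshift.io location comes from this document.
# Hedged sketch: add the Red Hat Helm chart repository and install a published chart
helm repo add openshift-helm-charts https://charts.openshift.io/
helm repo update
helm search repo openshift-helm-charts/my-partner-chart   # hypothetical chart name
helm install my-release openshift-helm-charts/my-partner-chart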
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/create-a-product-for-helmcharts_openshift-sw-cert-workflow-validating-helm-charts-for-certification
Chapter 2. APIRequestCount [apiserver.openshift.io/v1]
Chapter 2. APIRequestCount [apiserver.openshift.io/v1] Description APIRequestCount tracks requests made to an API. The instance name must be of the form resource.version.group , matching the resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the characteristics of the resource. status object status contains the observed state of the resource. 2.1.1. .spec Description spec defines the characteristics of the resource. Type object Property Type Description numberOfUsersToReport integer numberOfUsersToReport is the number of users to include in the report. If unspecified or zero, the default is ten. This is default is subject to change. 2.1.2. .status Description status contains the observed state of the resource. Type object Property Type Description conditions array conditions contains details of the current status of this API Resource. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } currentHour object currentHour contains request history for the current hour. This is porcelain to make the API easier to read by humans seeing if they addressed a problem. This field is reset on the hour. last24h array last24h contains request history for the last 24 hours, indexed by the hour, so 12:00AM-12:59 is in index 0, 6am-6:59am is index 6, etc. The index of the current hour is updated live and then duplicated into the requestsLastHour field. last24h[] object PerResourceAPIRequestLog logs request for various nodes. removedInRelease string removedInRelease is when the API will be removed. requestCount integer requestCount is a sum of all requestCounts across all current hours, nodes, and users. 2.1.3. .status.conditions Description conditions contains details of the current status of this API Resource. Type array 2.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.1.5. .status.currentHour Description currentHour contains request history for the current hour. This is porcelain to make the API easier to read by humans seeing if they addressed a problem. This field is reset on the hour. Type object Property Type Description byNode array byNode contains logs of requests per node. byNode[] object PerNodeAPIRequestLog contains logs of requests to a certain node. requestCount integer requestCount is a sum of all requestCounts across nodes. 2.1.6. .status.currentHour.byNode Description byNode contains logs of requests per node. Type array 2.1.7. .status.currentHour.byNode[] Description PerNodeAPIRequestLog contains logs of requests to a certain node. Type object Property Type Description byUser array byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. byUser[] object PerUserAPIRequestCount contains logs of a user's requests. nodeName string nodeName where the request are being handled. requestCount integer requestCount is a sum of all requestCounts across all users, even those outside of the top 10 users. 2.1.8. .status.currentHour.byNode[].byUser Description byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. Type array 2.1.9. 
.status.currentHour.byNode[].byUser[] Description PerUserAPIRequestCount contains logs of a user's requests. Type object Property Type Description byVerb array byVerb details by verb. byVerb[] object PerVerbAPIRequestCount requestCounts requests by API request verb. requestCount integer requestCount of requests by the user across all verbs. userAgent string userAgent that made the request. The same user often has multiple binaries which connect (pods with many containers). The different binaries will have different userAgents, but the same user. In addition, we have userAgents with version information embedded and the userName isn't likely to change. username string userName that made the request. 2.1.10. .status.currentHour.byNode[].byUser[].byVerb Description byVerb details by verb. Type array 2.1.11. .status.currentHour.byNode[].byUser[].byVerb[] Description PerVerbAPIRequestCount requestCounts requests by API request verb. Type object Property Type Description requestCount integer requestCount of requests for verb. verb string verb of API request (get, list, create, etc... ) 2.1.12. .status.last24h Description last24h contains request history for the last 24 hours, indexed by the hour, so 12:00AM-12:59 is in index 0, 6am-6:59am is index 6, etc. The index of the current hour is updated live and then duplicated into the requestsLastHour field. Type array 2.1.13. .status.last24h[] Description PerResourceAPIRequestLog logs request for various nodes. Type object Property Type Description byNode array byNode contains logs of requests per node. byNode[] object PerNodeAPIRequestLog contains logs of requests to a certain node. requestCount integer requestCount is a sum of all requestCounts across nodes. 2.1.14. .status.last24h[].byNode Description byNode contains logs of requests per node. Type array 2.1.15. .status.last24h[].byNode[] Description PerNodeAPIRequestLog contains logs of requests to a certain node. Type object Property Type Description byUser array byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. byUser[] object PerUserAPIRequestCount contains logs of a user's requests. nodeName string nodeName where the request are being handled. requestCount integer requestCount is a sum of all requestCounts across all users, even those outside of the top 10 users. 2.1.16. .status.last24h[].byNode[].byUser Description byUser contains request details by top .spec.numberOfUsersToReport users. Note that because in the case of an apiserver, restart the list of top users is determined on a best-effort basis, the list might be imprecise. In addition, some system users may be explicitly included in the list. Type array 2.1.17. .status.last24h[].byNode[].byUser[] Description PerUserAPIRequestCount contains logs of a user's requests. Type object Property Type Description byVerb array byVerb details by verb. byVerb[] object PerVerbAPIRequestCount requestCounts requests by API request verb. requestCount integer requestCount of requests by the user across all verbs. userAgent string userAgent that made the request. The same user often has multiple binaries which connect (pods with many containers). The different binaries will have different userAgents, but the same user. 
In addition, we have userAgents with version information embedded and the userName isn't likely to change. username string userName that made the request. 2.1.18. .status.last24h[].byNode[].byUser[].byVerb Description byVerb details by verb. Type array 2.1.19. .status.last24h[].byNode[].byUser[].byVerb[] Description PerVerbAPIRequestCount requestCounts requests by API request verb. Type object Property Type Description requestCount integer requestCount of requests for verb. verb string verb of API request (get, list, create, etc... ) 2.2. API endpoints The following API endpoints are available: /apis/apiserver.openshift.io/v1/apirequestcounts DELETE : delete collection of APIRequestCount GET : list objects of kind APIRequestCount POST : create an APIRequestCount /apis/apiserver.openshift.io/v1/apirequestcounts/{name} DELETE : delete an APIRequestCount GET : read the specified APIRequestCount PATCH : partially update the specified APIRequestCount PUT : replace the specified APIRequestCount /apis/apiserver.openshift.io/v1/apirequestcounts/{name}/status GET : read status of the specified APIRequestCount PATCH : partially update status of the specified APIRequestCount PUT : replace status of the specified APIRequestCount 2.2.1. /apis/apiserver.openshift.io/v1/apirequestcounts Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of APIRequestCount Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind APIRequestCount Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK APIRequestCountList schema 401 - Unauthorized Empty HTTP method POST Description create an APIRequestCount Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body APIRequestCount schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 201 - Created APIRequestCount schema 202 - Accepted APIRequestCount schema 401 - Unauthorized Empty 2.2.2. /apis/apiserver.openshift.io/v1/apirequestcounts/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the APIRequestCount Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an APIRequestCount Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIRequestCount Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIRequestCount Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIRequestCount Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body APIRequestCount schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 201 - Created APIRequestCount schema 401 - Unauthorized Empty 2.2.3. /apis/apiserver.openshift.io/v1/apirequestcounts/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the APIRequestCount Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified APIRequestCount Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIRequestCount Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIRequestCount Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body APIRequestCount schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK APIRequestCount schema 201 - Created APIRequestCount schema 401 - Unauthorized Empty
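The limit and continue parameters described in the tables above drive chunked list retrieval. The sketch below pages through APIRequestCount objects until the server stops returning a continue token; the API server URL and bearer token are placeholders you must replace, and certificate verification is disabled only to keep the example short. A production client would also restart the list, or reuse the token returned with a 410 ResourceExpired error and accept an inconsistent list, as described above.

import requests

API_SERVER = "https://api.cluster.example.com:6443"   # placeholder: your API server URL
TOKEN = "sha256~example-token"                        # placeholder: a valid bearer token
ENDPOINT = f"{API_SERVER}/apis/apiserver.openshift.io/v1/apirequestcounts"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

params = {"limit": 50}
while True:
    resp = requests.get(ENDPOINT, headers=HEADERS, params=params, verify=False)
    resp.raise_for_status()
    body = resp.json()
    for item in body.get("items", []):
        print(item["metadata"]["name"])
    # The server sets metadata.continue while more results remain; an empty
    # value means the full result set has been returned.
    cont = body.get("metadata", {}).get("continue", "")
    if not cont:
        break
    params = {"limit": 50, "continue": cont}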
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/metadata_apis/apirequestcount-apiserver-openshift-io-v1
Chapter 4. API Requests in Different Languages
Chapter 4. API Requests in Different Languages This chapter outlines sending API requests to Red Hat Satellite with curl, Ruby, and Python and provides examples. 4.1. API Requests with curl This section outlines how to use curl with the Satellite API to perform various tasks. Red Hat Satellite requires the use of HTTPS, and by default a certificate for host identification. If you have not added the Satellite Server certificate as described in Section 3.1, "SSL Authentication Overview" , then you can use the --insecure option to bypass certificate checks. For user authentication, you can use the --user option to provide Satellite user credentials in the form --user username:password or, if you do not include the password, the command prompts you to enter it. To reduce security risks, do not include the password as part of the command, because it then becomes part of your shell history. Examples in this section include the password only for the sake of simplicity. Be aware that if you use the --silent option, curl does not display a progress meter or any error messages. Examples in this chapter use the Python json.tool module to format the output. 4.1.1. Passing JSON Data to the API Request You can pass data to Satellite Server with the API request. The data must be in JSON format. When specifying JSON data with the --data option, you must set the following HTTP headers with the --header option: Use one of the following options to include data with the --data option: The quoted JSON formatted data enclosed in curly braces {} . When passing a value for a JSON type parameter, you must escape quotation marks " with backslashes \ . For example, within curly braces, you must format "Example JSON Variable" as \"Example JSON Variable\" : The unquoted JSON formatted data enclosed in a file and specified by the @ sign and the filename. For example: Using external files for JSON formatted data has the following advantages: You can use your favorite text editor. You can use syntax checker to find and avoid mistakes. You can use tools to check the validity of JSON data or to reformat it. Validating a JSON file Use the json_verify tool to check the validity of a JSON file: 4.1.2. Retrieving a List of Resources This section outlines how to use curl with the Satellite 6 API to request information from your Satellite deployment. These examples include both requests and responses. Expect different results for each deployment. Note The example requests below use python3 to format the respone from the Satellite Server. On RHEL 7 and some older systems, you must use python instead of python3 . Listing Users This example is a basic request that returns a list of Satellite resources, Satellite users in this case. Such requests return a list of data wrapped in metadata, while other request types only return the actual object. Example request: Example response: 4.1.3. Creating and Modifying Resources This section outlines how to use curl with the Satellite 6 API to manipulate resources on the Satellite Server. These API calls require that you pass data in json format with the API call. For more information, see Section 4.1.1, "Passing JSON Data to the API Request" . Creating a User This example creates a user using --data option to provide required information. Example request: Modifying a User This example modifies first name and login of the test_user that was created in Creating a User . Example request: 4.2. 
API Requests with Ruby This section outlines how to use Ruby with the Satellite API to perform various tasks. Important These are example scripts and commands. Ensure you review these scripts carefully before use, and replace any variables, user names, passwords, and other information to suit your own deployment. 4.2.1. Creating Objects Using Ruby This script connects to the Red Hat Satellite 6 API and creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. #!/usr/bin/ruby require 'rest-client' require 'json' url = 'https://satellite.example.com/api/v2/' katello_url = "#{url}/katello/api/v2/" USDusername = 'admin' USDpassword = 'changeme' org_name = "MyOrg" environments = [ "Development", "Testing", "Production" ] # Performs a GET using the passed URL location def get_json(location) response = RestClient::Request.new( :method => :get, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json } ).execute JSON.parse(response.to_str) end # Performs a POST and passes the data to the URL location def post_json(location, json_data) response = RestClient::Request.new( :method => :post, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json}, :payload => json_data ).execute JSON.parse(response.to_str) end # Creates a hash with ids mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end # Get list of existing organizations orgs = get_json("#{katello_url}/organizations") org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts "Creating organization: \t#{org_name}" org_id = post_json("#{katello_url}/organizations", JSON.generate({"name"=> org_name}))["id"] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts "Organization \"#{org_name}\" exists" end # Get list of organization's lifecycle environments envs = get_json("#{katello_url}/organizations/#{org_id}/environments") env_list = id_name_map(envs['results']) prior_env_id = env_list.key("Library") # Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts "ERROR: One of the Environments is not unique to organization" exit end end # Create life cycle environments environments.each do |environment| puts "Creating environment: \t#{environment}" prior_env_id = post_json("#{katello_url}/organizations/#{org_id}/environments", JSON.generate({"name" => environment, "organization_id" => org_id, "prior_id" => prior_env_id}))["id"] end 4.2.2. Using Apipie Bindings with Ruby Apipie bindings are the Ruby bindings for apipie documented API calls. They fetch and cache the API definition from Satellite and then generate API calls on demand. This example creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. 
#!/usr/bin/tfm-ruby require 'apipie-bindings' org_name = "MyOrg" environments = [ "Development", "Testing", "Production" ] # Create an instance of apipie bindings @api = ApipieBindings::API.new({ :uri => 'https://satellite.example.com/', :username => 'admin', :password => 'changeme', :api_version => 2 }) # Performs an API call with default options def call_api(resource_name, action_name, params = {}) http_headers = {} apipie_options = { :skip_validation => true } @api.resource(resource_name).call(action_name, params, http_headers, apipie_options) end # Creates a hash with IDs mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end # Get list of existing organizations orgs = call_api(:organizations, :index) org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts "Creating organization: \t#{org_name}" org_id = call_api(:organizations, :create, {'organization' => { :name => org_name }})['id'] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts "Organization \"#{org_name}\" exists" end # Get list of organization's life cycle environments envs = call_api(:lifecycle_environments, :index, {'organization_id' => org_id}) env_list = id_name_map(envs['results']) prior_env_id = env_list.key("Library") # Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts "ERROR: One of the Environments is not unique to organization" exit end end # Create life cycle environments environments.each do |environment| puts "Creating environment: \t#{environment}" prior_env_id = call_api(:lifecycle_environments, :create, {"name" => environment, "organization_id" => org_id, "prior_id" => prior_env_id })['id'] end 4.3. API Requests with Python This section outlines how to use Python with the Satellite API to perform various tasks. Important These are example scripts and commands. Ensure you review these scripts carefully before use, and replace any variables, user names, passwords, and other information to suit your own deployment. Example scripts in this section do not use SSL verification for interacting with the REST API. 4.3.1. Creating Objects Using Python This script connects to the Red Hat Satellite 6 API and creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. Python 2 Example #!/usr/bin/python import json import sys try: import requests except ImportError: print "Please install the python-requests module." 
sys.exit(-1) # URL to your Satellite 6 server URL = "https://satellite.example.com" # URL for the API to your deployed Satellite 6 server SAT_API = "%s/katello/api/v2/" % URL # Katello-specific API KATELLO_API = "%s/katello/api/" % URL POST_HEADERS = {'content-type': 'application/json'} # Default credentials to login to Satellite 6 USERNAME = "admin" PASSWORD = "changeme" # Ignore SSL for now SSL_VERIFY = False # Name of the organization to be either created or used ORG_NAME = "MyOrg" # Name for life cycle environments to be either created or used ENVIRONMENTS = ["Development", "Testing", "Production"] def get_json(location): """ Performs a GET using the passed URL location """ r = requests.get(location, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def post_json(location, json_data): """ Performs a POST and passes the data to the URL location """ result = requests.post( location, data=json_data, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY, headers=POST_HEADERS) return result.json() def main(): """ Main routine that creates or re-uses an organization and life cycle environments. If life cycle environments already exist, exit out. """ # Check if our organization already exists org = get_json(SAT_API + "organizations/" + ORG_NAME) # If our organization is not found, create it if org.get('error', None): org_id = post_json( SAT_API + "organizations/", json.dumps({"name": ORG_NAME}))["id"] print "Creating organization: \t" + ORG_NAME else: # Our organization exists, so let's grab it org_id = org['id'] print "Organization '%s' exists." % ORG_NAME # Now, let's fetch all available life cycle environments for this org... envs = get_json( SAT_API + "organizations/" + str(org_id) + "/environments/") # ... and add them to a dictionary, with respective 'Prior' environment prior_env_id = 0 env_list = {} for env in envs['results']: env_list[env['id']] = env['name'] prior_env_id = env['id'] if env['name'] == "Library" else prior_env_id # Exit the script if at least one life cycle environment already exists if all(environment in env_list.values() for environment in ENVIRONMENTS): print "ERROR: One of the Environments is not unique to organization" sys.exit(-1) # Create life cycle environments for environment in ENVIRONMENTS: new_env_id = post_json( SAT_API + "organizations/" + str(org_id) + "/environments/", json.dumps( { "name": environment, "organization_id": org_id, "prior": prior_env_id} ))["id"] print "Creating environment: \t" + environment prior_env_id = new_env_id if __name__ == "__main__": main() 4.3.2. Requesting information from the API using Python This is an example script that uses Python for various API requests. Python 2 Example #!/usr/bin/python import json import sys try: import requests except ImportError: print "Please install the python-requests module." 
sys.exit(-1) SAT_API = 'https://satellite.example.com/api/v2/' USERNAME = "admin" PASSWORD = "password" SSL_VERIFY = False # Ignore SSL for now def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print "Error: " + jsn['error']['message'] else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print "No results found" return None def display_all_results(url): results = get_results(url) if results: print json.dumps(results, indent=4, sort_keys=True) def display_info_for_hosts(url): hosts = get_results(url) if hosts: for host in hosts: print "ID: %-10d Name: %-30s IP: %-20s OS: %-30s" % (host['id'], host['name'], host['ip'], host['operatingsystem_name']) def main(): host = 'satellite.example.com' print "Displaying all info for host %s ..." % host display_all_results(SAT_API + 'hosts/' + host) print "Displaying all facts for host %s ..." % host display_all_results(SAT_API + 'hosts/%s/facts' % host) host_pattern = 'example' print "Displaying basic info for hosts matching pattern '%s'..." % host_pattern display_info_for_hosts(SAT_API + 'hosts?search=' + host_pattern) environment = 'production' print "Displaying basic info for hosts in environment %s..." % environment display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) model = 'RHEV Hypervisor' print "Displaying basic info for hosts with model name %s..." % model display_info_for_hosts(SAT_API + 'hosts?search=model="' + model + '"') if __name__ == "__main__": main() Python 3 Example #!/usr/bin/env python3 import json import sys try: import requests except ImportError: print("Please install the python-requests module.") sys.exit(-1) SAT = "satellite.example.com" # URL for the API to your deployed Satellite 6 server SAT_API = f"https://{SAT}/api/" KATELLO_API = f"https://{SAT}/katello/api/v2/" POST_HEADERS = {'content-type': 'application/json'} # Default credentials to login to Satellite 6 USERNAME = "admin" PASSWORD = "password" # Ignore SSL for now SSL_VERIFY = False #SSL_VERIFY = "./path/to/CA-certificate.crt" # Put the path to your CA certificate here to allow SSL_VERIFY def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print("Error: " + jsn['error']['message']) else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print("No results found") return None def display_all_results(url): results = get_results(url) if results: print(json.dumps(results, indent=4, sort_keys=True)) def display_info_for_hosts(url): hosts = get_results(url) if hosts: print(f"{'ID':10}{'Name':40}{'IP':30}{'Operating System':30}") for host in hosts: print(f"{str(host['id']):10}{host['name']:40}{str(host['ip']):30}{str(host['operatingsystem_name']):30}") def display_info_for_subs(url): subs = get_results(url) if subs: print(f"{'ID':10}{'Name':90}{'Start Date':30}") for sub in subs: print(f"{str(sub['id']):10}{sub['name']:90}{str(sub['start_date']):30}") def main(): host = SAT print(f"Displaying all info for host {host} ...") display_all_results(SAT_API + 'hosts/' + host) print(f"Displaying all facts for host {host} ...") display_all_results(SAT_API + f'hosts/{host}/facts') host_pattern = 'example' print(f"Displaying basic info for hosts matching pattern 
'{host_pattern}'...") display_info_for_hosts(SAT_API + 'hosts?per_page=1&search=name~' + host_pattern) print(f"Displaying basic info for subscriptions") display_info_for_subs(KATELLO_API + 'subscriptions') environment = 'production' print(f"Displaying basic info for hosts in environment {environment}...") display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) if __name__ == "__main__": main()
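The example response in Section 4.1.2 shows that Satellite list endpoints return page, per_page, subtotal, and total fields rather than a single complete result set. The following sketch, in the same style as the Python 3 example above, collects every page of a collection; the hosts endpoint and page size are illustrative, and the credentials and SSL handling mirror the examples in this section.

#!/usr/bin/env python3
import requests

SAT_API = "https://satellite.example.com/api/v2/"
USERNAME = "admin"
PASSWORD = "password"
SSL_VERIFY = False  # as in the examples above; point this at a CA bundle in production

def get_all_results(endpoint, per_page=50):
    # Walk through the paginated results of a Satellite API collection.
    page = 1
    results = []
    while True:
        r = requests.get(
            SAT_API + endpoint,
            params={"page": page, "per_page": per_page},
            auth=(USERNAME, PASSWORD),
            verify=SSL_VERIFY,
        )
        r.raise_for_status()
        data = r.json()
        results.extend(data.get("results", []))
        # 'subtotal' is the number of records matching the query; stop once
        # every matching record has been collected or a page comes back empty.
        if not data.get("results") or len(results) >= data.get("subtotal", 0):
            break
        page += 1
    return results

if __name__ == "__main__":
    for host in get_all_results("hosts"):
        print(host["id"], host["name"])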
[ "--header \"Accept:application/json\" --header \"Content-Type:application/json\"", "--data {\"id\":44, \"smart_class_parameter\":{\"override\":\"true\", \"parameter_type\":\"json\", \"default_value\":\"{\\\"GRUB_CMDLINE_LINUX\\\": {\\\"audit\\\":\\\"1\\\",\\\"crashkernel\\\":\\\"true\\\"}}\"}}", "--data @ file .json", "json_verify < test_file .json", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/users | python3 -m json.tool", "{ \"page\": 1, \"per_page\": 20, \"results\": [ { \"admin\": false, \"auth_source_id\": 1, \"auth_source_name\": \"Internal\", \"created_at\": \"2018-09-21 08:59:22 UTC\", \"default_location\": null, \"default_organization\": null, \"description\": \"\", \"effective_admin\": false, \"firstname\": \"\", \"id\": 5, \"last_login_on\": \"2018-09-21 09:03:25 UTC\", \"lastname\": \"\", \"locale\": null, \"locations\": [], \"login\": \"test\", \"mail\": \"[email protected]\", \"organizations\": [ { \"id\": 1, \"name\": \"Default Organization\" } ], \"ssh_keys\": [], \"timezone\": null, \"updated_at\": \"2018-09-21 09:04:45 UTC\" }, { \"admin\": true, \"auth_source_id\": 1, \"auth_source_name\": \"Internal\", \"created_at\": \"2018-09-20 07:09:41 UTC\", \"default_location\": null, \"default_organization\": { \"description\": null, \"id\": 1, \"name\": \"Default Organization\", \"title\": \"Default Organization\" }, \"description\": \"\", \"effective_admin\": true, \"firstname\": \"Admin\", \"id\": 4, \"last_login_on\": \"2018-12-07 07:31:09 UTC\", \"lastname\": \"User\", \"locale\": null, \"locations\": [ { \"id\": 2, \"name\": \"Default Location\" } ], \"login\": \"admin\", \"mail\": \"[email protected]\", \"organizations\": [ { \"id\": 1, \"name\": \"Default Organization\" } ], \"ssh_keys\": [], \"timezone\": null, \"updated_at\": \"2018-11-14 08:19:46 UTC\" } ], \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"subtotal\": 2, \"total\": 2 }", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data \"{\\\"firstname\\\":\\\" Test Name \\\",\\\"mail\\\":\\\" [email protected] \\\",\\\"login\\\":\\\" test_user \\\",\\\"password\\\":\\\" password123 \\\",\\\"auth_source_id\\\": 1 }\" https:// satellite.example.com /api/users | python3 -m json.tool", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data \"{\\\"firstname\\\":\\\" New Test Name \\\",\\\"mail\\\":\\\" [email protected] \\\",\\\"login\\\":\\\" new_test_user \\\",\\\"password\\\":\\\" password123 \\\",\\\"auth_source_id\\\": 1 }\" https:// satellite.example.com /api/users/ 8 | python3 -m json.tool", "#!/usr/bin/ruby require 'rest-client' require 'json' url = 'https://satellite.example.com/api/v2/' katello_url = \"#{url}/katello/api/v2/\" USDusername = 'admin' USDpassword = 'changeme' org_name = \"MyOrg\" environments = [ \"Development\", \"Testing\", \"Production\" ] Performs a GET using the passed URL location def get_json(location) response = RestClient::Request.new( :method => :get, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json } ).execute JSON.parse(response.to_str) end Performs a POST and passes the data to the URL location def post_json(location, json_data) response = RestClient::Request.new( :method => :post, :url => location, :user => USDusername, :password => USDpassword, 
:headers => { :accept => :json, :content_type => :json}, :payload => json_data ).execute JSON.parse(response.to_str) end Creates a hash with ids mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end Get list of existing organizations orgs = get_json(\"#{katello_url}/organizations\") org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts \"Creating organization: \\t#{org_name}\" org_id = post_json(\"#{katello_url}/organizations\", JSON.generate({\"name\"=> org_name}))[\"id\"] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts \"Organization \\\"#{org_name}\\\" exists\" end Get list of organization's lifecycle environments envs = get_json(\"#{katello_url}/organizations/#{org_id}/environments\") env_list = id_name_map(envs['results']) prior_env_id = env_list.key(\"Library\") Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts \"ERROR: One of the Environments is not unique to organization\" exit end end # Create life cycle environments environments.each do |environment| puts \"Creating environment: \\t#{environment}\" prior_env_id = post_json(\"#{katello_url}/organizations/#{org_id}/environments\", JSON.generate({\"name\" => environment, \"organization_id\" => org_id, \"prior_id\" => prior_env_id}))[\"id\"] end", "#!/usr/bin/tfm-ruby require 'apipie-bindings' org_name = \"MyOrg\" environments = [ \"Development\", \"Testing\", \"Production\" ] Create an instance of apipie bindings @api = ApipieBindings::API.new({ :uri => 'https://satellite.example.com/', :username => 'admin', :password => 'changeme', :api_version => 2 }) Performs an API call with default options def call_api(resource_name, action_name, params = {}) http_headers = {} apipie_options = { :skip_validation => true } @api.resource(resource_name).call(action_name, params, http_headers, apipie_options) end Creates a hash with IDs mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end Get list of existing organizations orgs = call_api(:organizations, :index) org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts \"Creating organization: \\t#{org_name}\" org_id = call_api(:organizations, :create, {'organization' => { :name => org_name }})['id'] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts \"Organization \\\"#{org_name}\\\" exists\" end Get list of organization's life cycle environments envs = call_api(:lifecycle_environments, :index, {'organization_id' => org_id}) env_list = id_name_map(envs['results']) prior_env_id = env_list.key(\"Library\") Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts \"ERROR: One of the Environments is not unique to organization\" exit end end # Create life cycle environments environments.each do |environment| puts \"Creating environment: \\t#{environment}\" prior_env_id = call_api(:lifecycle_environments, :create, {\"name\" => environment, \"organization_id\" => org_id, \"prior_id\" => prior_env_id })['id'] end", "#!/usr/bin/python import json import sys try: import requests except ImportError: print \"Please install the python-requests module.\" sys.exit(-1) 
URL to your Satellite 6 server URL = \"https://satellite.example.com\" URL for the API to your deployed Satellite 6 server SAT_API = \"%s/katello/api/v2/\" % URL Katello-specific API KATELLO_API = \"%s/katello/api/\" % URL POST_HEADERS = {'content-type': 'application/json'} Default credentials to login to Satellite 6 USERNAME = \"admin\" PASSWORD = \"changeme\" Ignore SSL for now SSL_VERIFY = False Name of the organization to be either created or used ORG_NAME = \"MyOrg\" Name for life cycle environments to be either created or used ENVIRONMENTS = [\"Development\", \"Testing\", \"Production\"] def get_json(location): \"\"\" Performs a GET using the passed URL location \"\"\" r = requests.get(location, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def post_json(location, json_data): \"\"\" Performs a POST and passes the data to the URL location \"\"\" result = requests.post( location, data=json_data, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY, headers=POST_HEADERS) return result.json() def main(): \"\"\" Main routine that creates or re-uses an organization and life cycle environments. If life cycle environments already exist, exit out. \"\"\" # Check if our organization already exists org = get_json(SAT_API + \"organizations/\" + ORG_NAME) # If our organization is not found, create it if org.get('error', None): org_id = post_json( SAT_API + \"organizations/\", json.dumps({\"name\": ORG_NAME}))[\"id\"] print \"Creating organization: \\t\" + ORG_NAME else: # Our organization exists, so let's grab it org_id = org['id'] print \"Organization '%s' exists.\" % ORG_NAME # Now, let's fetch all available life cycle environments for this org envs = get_json( SAT_API + \"organizations/\" + str(org_id) + \"/environments/\") # ... and add them to a dictionary, with respective 'Prior' environment prior_env_id = 0 env_list = {} for env in envs['results']: env_list[env['id']] = env['name'] prior_env_id = env['id'] if env['name'] == \"Library\" else prior_env_id # Exit the script if at least one life cycle environment already exists if all(environment in env_list.values() for environment in ENVIRONMENTS): print \"ERROR: One of the Environments is not unique to organization\" sys.exit(-1) # Create life cycle environments for environment in ENVIRONMENTS: new_env_id = post_json( SAT_API + \"organizations/\" + str(org_id) + \"/environments/\", json.dumps( { \"name\": environment, \"organization_id\": org_id, \"prior\": prior_env_id} ))[\"id\"] print \"Creating environment: \\t\" + environment prior_env_id = new_env_id if __name__ == \"__main__\": main()", "#!/usr/bin/python import json import sys try: import requests except ImportError: print \"Please install the python-requests module.\" sys.exit(-1) SAT_API = 'https://satellite.example.com/api/v2/' USERNAME = \"admin\" PASSWORD = \"password\" SSL_VERIFY = False # Ignore SSL for now def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print \"Error: \" + jsn['error']['message'] else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print \"No results found\" return None def display_all_results(url): results = get_results(url) if results: print json.dumps(results, indent=4, sort_keys=True) def display_info_for_hosts(url): hosts = get_results(url) if hosts: for host in hosts: print \"ID: %-10d Name: %-30s IP: %-20s OS: %-30s\" % (host['id'], 
host['name'], host['ip'], host['operatingsystem_name']) def main(): host = 'satellite.example.com' print \"Displaying all info for host %s ...\" % host display_all_results(SAT_API + 'hosts/' + host) print \"Displaying all facts for host %s ...\" % host display_all_results(SAT_API + 'hosts/%s/facts' % host) host_pattern = 'example' print \"Displaying basic info for hosts matching pattern '%s'...\" % host_pattern display_info_for_hosts(SAT_API + 'hosts?search=' + host_pattern) environment = 'production' print \"Displaying basic info for hosts in environment %s...\" % environment display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) model = 'RHEV Hypervisor' print \"Displaying basic info for hosts with model name %s...\" % model display_info_for_hosts(SAT_API + 'hosts?search=model=\"' + model + '\"') if __name__ == \"__main__\": main()", "#!/usr/bin/env python3 import json import sys try: import requests except ImportError: print(\"Please install the python-requests module.\") sys.exit(-1) SAT = \"satellite.example.com\" URL for the API to your deployed Satellite 6 server SAT_API = f\"https://{SAT}/api/\" KATELLO_API = f\"https://{SAT}/katello/api/v2/\" POST_HEADERS = {'content-type': 'application/json'} Default credentials to login to Satellite 6 USERNAME = \"admin\" PASSWORD = \"password\" Ignore SSL for now SSL_VERIFY = False #SSL_VERIFY = \"./path/to/CA-certificate.crt\" # Put the path to your CA certificate here to allow SSL_VERIFY def get_json(url): # Performs a GET using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print(\"Error: \" + jsn['error']['message']) else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print(\"No results found\") return None def display_all_results(url): results = get_results(url) if results: print(json.dumps(results, indent=4, sort_keys=True)) def display_info_for_hosts(url): hosts = get_results(url) if hosts: print(f\"{'ID':10}{'Name':40}{'IP':30}{'Operating System':30}\") for host in hosts: print(f\"{str(host['id']):10}{host['name']:40}{str(host['ip']):30}{str(host['operatingsystem_name']):30}\") def display_info_for_subs(url): subs = get_results(url) if subs: print(f\"{'ID':10}{'Name':90}{'Start Date':30}\") for sub in subs: print(f\"{str(sub['id']):10}{sub['name']:90}{str(sub['start_date']):30}\") def main(): host = SAT print(f\"Displaying all info for host {host} ...\") display_all_results(SAT_API + 'hosts/' + host) print(f\"Displaying all facts for host {host} ...\") display_all_results(SAT_API + f'hosts/{host}/facts') host_pattern = 'example' print(f\"Displaying basic info for hosts matching pattern '{host_pattern}'...\") display_info_for_hosts(SAT_API + 'hosts?per_page=1&search=name~' + host_pattern) print(f\"Displaying basic info for subscriptions\") display_info_for_subs(KATELLO_API + 'subscriptions') environment = 'production' print(f\"Displaying basic info for hosts in environment {environment}...\") display_info_for_hosts(SAT_API + 'hosts?search=environment=' + environment) if __name__ == \"__main__\": main()" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/chap-red_hat_satellite-api_guide-api_requests_in_different_languages
Chapter 13. Security best practices
Chapter 13. Security best practices You can deploy automation controller to automate typical environments securely. However, managing certain operating system environments, automation, and automation platforms, can require additional best practices to ensure security. To secure Red Hat Enterprise Linux start with the following release-appropriate security guide: For Red Hat Enterprise Linux 8, see Security hardening . For Red Hat Enterprise Linux 9, see Security hardening . 13.1. Understand the architecture of Ansible Automation Platform and automation controller Ansible Automation Platform and automation controller comprise a general-purpose, declarative automation platform. That means that when an Ansible Playbook is launched (by automation controller, or directly on the command line), the playbook, inventory, and credentials provided to Ansible are considered to be the source of truth. If you want policies around external verification of specific playbook content, job definition, or inventory contents, you must complete these processes before the automation is launched, either by the automation controller web UI, or the automation controller API. The use of source control, branching, and mandatory code review is best practice for Ansible automation. There are tools that can help create process flow around using source control in this manner. At a higher level, tools exist that enable creation of approvals and policy-based actions around arbitrary workflows, including automation. These tools can then use Ansible through the automation controller's API to perform automation. You must use a secure default administrator password at the time of automation controller installation. For more information, see Change the automation controller Administrator Password . Automation controller exposes services on certain well-known ports, such as port 80 for HTTP traffic and port 443 for HTTPS traffic. Do not expose automation controller on the open internet, which reduces the threat surface of your installation. 13.1.1. Granting access Granting access to certain parts of the system exposes security risks. Apply the following practices to help secure access: Minimize administrative accounts Minimize local system access Remove access to credentials from users Enforce separation of duties 13.1.2. Minimize administrative accounts Minimizing the access to system administrative accounts is crucial for maintaining a secure system. A system administrator or root user can access, edit, and disrupt any system application. Limit the number of people or accounts with root access, where possible. Do not give out sudo to root or awx (the automation controller user) to untrusted users. Note that when restricting administrative access through mechanisms like sudo , restricting to a certain set of commands can still give a wide range of access. Any command that enables execution of a shell or arbitrary shell commands, or any command that can change files on the system, is equal to full root access. With automation controller, any automation controller "system administrator" or "superuser" account can edit, change, and update an inventory or automation definition in automation controller. Restrict this to the minimum set of users possible for low-level automation controller configuration and disaster recovery only. 13.1.3. Minimize local system access When you use automation controller with best practices, it does not require local user access except for administrative purposes. 
Non-administrator users do not have access to the automation controller system. 13.1.4. Remove user access to credentials If an automation controller credential is only stored in the controller, you can further secure it. You can configure services such as OpenSSH to only permit credentials on connections from specific addresses. Credentials used by automation can be different from credentials used by system administrators for disaster-recovery or other ad hoc management, allowing for easier auditing. 13.1.5. Enforce separation of duties Different pieces of automation might require access to a system at different levels. For example, you can have low-level system automation that applies patches and performs security baseline checking, while a higher-level piece of automation deploys applications. By using different keys or credentials for each piece of automation, the effect of any one key vulnerability is minimized, while also enabling baseline auditing. 13.2. Available resources Several resources exist in automation controller and elsewhere to ensure a secure platform. Consider using the following functionalities: Existing security functionality External account stores Django password policies 13.2.1. Existing security functionality Do not disable SELinux or automation controller's existing multi-tenant containment. Use automation controller's role-based access control (RBAC) to delegate the minimum level of privileges required to run automation. Use teams in automation controller to assign permissions to groups of users rather than to users individually. Additional resources For more information, see Role-Based Access Controls in Using automation execution . 13.2.2. External account stores Maintaining a full set of users in automation controller can be a time-consuming task in a large organization. Automation controller supports connecting to external account sources by LDAP, SAML 2.0, and certain OAuth providers. Using this eliminates a source of error when working with permissions. 13.2.3. Django password policies Automation controller administrators can use Django to set password policies at creation time through AUTH_PASSWORD_VALIDATORS to validate automation controller user passwords. In the custom.py file located at /etc/tower/conf.d of your automation controller instance, add the following code block example: AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 'OPTIONS': { 'min_length': 9, } }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] Additional resources For more information, see Password validation in Django in addition to the preceding example. Ensure that you restart your automation controller instance for the change to take effect. For more information, see Start, stop, and restart automation controller .
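If you want to check what a given AUTH_PASSWORD_VALIDATORS configuration accepts or rejects before rolling it out, you can exercise the same validator classes with Django's password_validation helpers. The following standalone sketch is illustrative only: it configures minimal Django settings in-process because it runs outside an automation controller deployment, and it omits UserAttributeSimilarityValidator because that validator needs a user object.

from django.conf import settings

settings.configure(USE_I18N=False)  # minimal in-process settings for this sketch only

from django.contrib.auth.password_validation import (
    get_password_validators,
    validate_password,
)
from django.core.exceptions import ValidationError

VALIDATORS = get_password_validators([
    {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
     "OPTIONS": {"min_length": 9}},
    {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
    {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
])

for candidate in ("password", "123456789", "long-Unc0mmon-passphrase"):
    try:
        validate_password(candidate, password_validators=VALIDATORS)
        print(f"{candidate!r}: accepted")
    except ValidationError as err:
        print(f"{candidate!r}: rejected ({'; '.join(err.messages)})")

The validator names and options here match the custom.py example above; only the execution context is a sketch.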
[ "AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 'OPTIONS': { 'min_length': 9, } }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ]" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/controller-security-best-practices
Appendix B. Connecting with JConsole
Appendix B. Connecting with JConsole B.1. Connect to JDG via JConsole JConsole is a JMX GUI that allows a user to connect to a JVM, either local or remote, to monitor the JVM, its MBeans, and execute operations. Procedure B.1. Add Management User to JBoss Data Grid Before being able to connect to a remote JBoss Data Grid instance a user will need to be created; to add a user execute the following steps on the remote instance. Navigate to the bin directory Execute the add-user.sh script. Accept the default option of ManagementUser by pressing return. Accept the default option of ManagementRealm by pressing return. Enter the desired username. In this example jmxadmin will be used. Enter and confirm the password. Accept the default option of no groups by pressing return. Confirm that the desired user will be added to the ManagementRealm by entering yes . Enter no as this user will not be used for connections between processes. The following image shows an example execution run. Figure B.1. Execution of add-user.sh Binding the Management Interface By default JBoss Data Grid will start with the management interface binding to 127.0.0.1. In order to connect remotely this interface must be bound to an IP address that is visible by the network. Either of the following options will correct this: Option 1: Runtime - By adjusting the jboss.bind.address.management property on startup a new IP address can be specified. In the following example JBoss Data Grid is starting with this bound to 192.168.122.5: Option 2: Configuration - Adjust the jboss.bind.address.management in the configuration file. This is found in the interfaces subsystem. A snippet of the configuration file, with the IP adjusted to 192.168.122.5, is provided below: Running JConsole A jconsole.sh script is provided in the USDJDG_HOME/bin directory. Executing this script will launch JConsole. Procedure B.2. Connecting to a remote JBoss Data Grid instance using JConsole Execute the USDJDG_HOME/bin/jconsole.sh script. This will result in the following window appearing: Figure B.2. JConsole Select Remote Process . Enter service:jmx:remoting-jmx://USDIP:9999 in the text area. Enter the username and password, created from the add-user.sh script. Click Connect to initiate the connection. Once connected ensure that the cache-related nodes may be viewed. The following screenshot shows such a node. Figure B.3. JConsole: Showing a Cache Report a bug
[ "cd USDJDG_HOME/bin", "./add-user.sh", "./standalone.sh ... -Djboss.bind.address.management=192.168.122.5", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:192.168.122.5}\"/> </interface> [...] </interface>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/appe-connecting_with_jconsole
function::kernel_string2_utf32
function::kernel_string2_utf32 Name function::kernel_string2_utf32 - Retrieves UTF-32 string from kernel memory with alternative error string Synopsis Arguments addr The kernel address to retrieve the string from err_msg The error message to return when data isn't available Description This function returns a null terminated UTF-8 string converted from the UTF-32 string at a given kernel memory address. Reports the given error message on string copy fault or conversion error.
[ "kernel_string2_utf32:string(addr:long,err_msg:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string2-utf32
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OpenShift Container Platform 4.2 to 4.5 source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.14 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . Install the Migration Toolkit for Containers Operator on the source cluster: OpenShift Container Platform 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager. OpenShift Container Platform 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface. Configure object storage to use as a replication repository. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . To uninstall MTC, see Uninstalling MTC and deleting resources . 4.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 4.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. 
For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 4.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.14 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.14 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 4.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.14. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. 
Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 4.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.14, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 4.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 4.4.1.1. 
TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 4.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 4.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 4.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 4.4.2.1. NetworkPolicy configuration 4.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. 
The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 4.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 4.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 4.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 4.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 4.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 4.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . 
to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 4.5. Running Rsync as either root or non-root Important This section applies only when you are working with the OpenShift API, not the web console. OpenShift environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, MTC 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges prior to migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 4.5.1. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 4.5.2. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 4.6. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 4.6.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 4.6.2. Retrieving Multicloud Object Gateway credentials Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . 4.6.3. Additional resources Procedure Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 4.7. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
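The cluster-scoped cleanup commands above lend themselves to being run as one batch when you uninstall MTC from more than one cluster. The following sketch only chains the oc delete commands already documented in this section; the cleanup-mtc.sh file name, the MIGRATION_CONTROLLER variable, and the loop over kubeconfig contexts (source-cluster, target-cluster) are illustrative assumptions, so substitute the names used in your environment.

#!/bin/bash
# cleanup-mtc.sh - hypothetical wrapper around the documented MTC cleanup commands.
# No "set -e" on purpose: the script keeps going if a resource type was already removed.
set -u

MIGRATION_CONTROLLER=migration-controller   # assumed name of the MigrationController CR

for context in source-cluster target-cluster; do   # assumed kubeconfig context names
  oc config use-context "$context"
  # Delete the MigrationController CR on this cluster.
  oc delete migrationcontroller "$MIGRATION_CONTROLLER"
  # Delete migration and velero CRDs, cluster roles, and cluster role bindings.
  oc delete $(oc get crds -o name | grep 'migration.openshift.io')
  oc delete $(oc get crds -o name | grep 'velero')
  oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')
  oc delete clusterrole migration-operator
  oc delete $(oc get clusterroles -o name | grep 'velero')
  oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')
  oc delete clusterrolebindings migration-operator
  oc delete $(oc get clusterrolebindings -o name | grep 'velero')
done

Note that the documented procedure also calls for uninstalling the Operator itself through Operator Lifecycle Manager, which this sketch does not do.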
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/installing-mtc-restricted
Chapter 1. Red Hat Virtualization Architecture
Chapter 1. Red Hat Virtualization Architecture Red Hat Virtualization can be deployed as a self-hosted engine, or as a standalone Manager. A self-hosted engine is the recommended deployment option. 1.1. Self-Hosted Engine Architecture The Red Hat Virtualization Manager runs as a virtual machine on self-hosted engine nodes (specialized hosts) in the same environment it manages. A self-hosted engine environment requires one less physical server, but requires more administrative overhead to deploy and manage. The Manager is highly available without external HA management. The minimum setup of a self-hosted engine environment includes: One Red Hat Virtualization Manager virtual machine that is hosted on the self-hosted engine nodes. The RHV-M Appliance is used to automate the installation of a Red Hat Enterprise Linux 8 virtual machine, and the Manager on that virtual machine. A minimum of two self-hosted engine nodes for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. The HA services run on all self-hosted engine nodes to manage the high availability of the Manager virtual machine. One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts. Figure 1.1. Self-Hosted Engine Red Hat Virtualization Architecture 1.2. Standalone Manager Architecture The Red Hat Virtualization Manager runs on a physical server, or a virtual machine hosted in a separate virtualization environment. A standalone Manager is easier to deploy and manage, but requires an additional physical server. The Manager is only highly available when managed externally with a product such as Red Hat's High Availability Add-On. The minimum setup for a standalone Manager environment includes: One Red Hat Virtualization Manager machine. The Manager is typically deployed on a physical server. However, it can also be deployed on a virtual machine, as long as that virtual machine is hosted in a separate environment. The Manager must run on Red Hat Enterprise Linux 8. A minimum of two hosts for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts. Figure 1.2. Standalone Manager Red Hat Virtualization Architecture
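This chapter is architectural, but a quick way to relate it to a running deployment is to check the components it names on a self-hosted engine node. The command and service names below (hosted-engine, vdsmd, ovirt-ha-agent, ovirt-ha-broker) are not taken from this chapter; they are a hedged sketch of what a typical RHV 4.4 self-hosted engine node exposes, so verify the names against your installation.

# Run on a self-hosted engine node (assumed service and command names, see note above).
# VDSM is the host agent; the two ovirt-ha services implement Manager VM high availability.
systemctl status vdsmd ovirt-ha-agent ovirt-ha-broker
# Report the state, score, and hosting node of the Manager virtual machine across all
# self-hosted engine nodes in the environment.
hosted-engine --vm-status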
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/rhv_architecture
Chapter 8. Troubleshooting authentication with SSSD in IdM
Chapter 8. Troubleshooting authentication with SSSD in IdM Authentication in an Identity Management (IdM) environment involves many components: On the IdM client: The SSSD service. The Name Services Switch (NSS). Pluggable Authentication Modules (PAM). On the IdM server: The SSSD service. The IdM Directory Server. The IdM Kerberos Key Distribution Center (KDC). If you are authenticating as an Active Directory (AD) user: The Directory Server on an AD Domain Controller. The Kerberos server on an AD Domain Controller. To authenticate users, you must be able to perform the following functions with the SSSD service: Retrieve user information from the authentication server. Prompt the user for their credentials, pass those credentials to the authentication server, and process the outcome. To learn more about how information flows between the SSSD service and servers that store user information, so you can troubleshoot failing authentication attempts in your environment, see the following: Data flow when retrieving IdM user information with SSSD Data flow when retrieving AD user information with SSSD Data flow when authenticating as a user with SSSD in IdM Narrowing the scope of authentication issues SSSD log files and logging levels Enabling detailed logging for SSSD in the sssd.conf file Enabling detailed logging for SSSD with the sssctl command Gathering debugging logs from the SSSD service to troubleshoot authentication issues with an IdM server Gathering debugging logs from the SSSD service to troubleshoot authentication issues with an IdM client Tracking client requests in the SSSD backend Tracking client requests using the log analyzer tool 8.1. Data flow when retrieving IdM user information with SSSD The following diagram is a simplification of the information flow between an IdM client and an IdM server during a request for IdM user information with the command getent passwd <idm_user_name> . The getent command triggers the getpwnam call from the libc library. The libc library references the /etc/nsswitch.conf configuration file to check which service is responsible for providing user information, and discovers the entry sss for the SSSD service. The libc library opens the nss_sss module. The nss_sss module checks the memory-mapped cache for the user information. If the data is present in the cache, the nss_sss module returns it. If the user information is not in the memory-mapped cache, the request is passed to the SSSD sssd_nss responder process. The SSSD service checks its cache. If the data is present in the cache and valid, the sssd_nss responder reads the data from the cache and returns it to the application. If the data is not present in the cache or it is expired, the sssd_nss responder queries the appropriate back-end process and waits for a reply. The SSSD service uses the IPA backend in an IdM environment, enabled by the setting id_provider=ipa in the sssd.conf configuration file. The sssd_be back-end process connects to the IdM server and requests the information from the IdM LDAP Directory Server. The SSSD back-end on the IdM server responds to the SSSD back-end process on the IdM client. The SSSD back-end on the client stores the resulting data in the SSSD cache and alerts the responder process that the cache has been updated. The sssd_nss front-end responder process retrieves the information from the SSSD cache. The sssd_nss responder sends the user information to the nss_sss responder, completing the request. 
The libc library returns the user information to the application that requested it. 8.2. Data flow when retrieving AD user information with SSSD If you have established a cross-forest trust between your IdM environment and an Active Directory (AD) domain, the information flow when retrieving AD user information about an IdM client is very similar to the information flow when retrieving IdM user information, with the additional step of contacting the AD user database. The following diagram is a simplification of the information flow when a user requests information about an AD user with the command getent passwd <[email protected]> . This diagram does not include the internal details discussed in the Data flow when retrieving IdM user information with SSSD section. It focuses on the communication between the SSSD service on an IdM client, the SSSD service on an IdM server, and the LDAP database on an AD Domain Controller. The IdM client looks to its local SSSD cache for AD user information. If the IdM client does not have the user information, or the information is stale, the SSSD service on the client contacts the extdom_extop plugin on the IdM server to perform an LDAP extended operation and requests the information. The SSSD service on the IdM server looks for the AD user information in its local cache. If the IdM server does not have the user information in its SSSD cache, or its information is stale, it performs an LDAP search to request the user information from an AD Domain Controller. The SSSD service on the IdM server receives the AD user information from the AD domain controller and stores it in its cache. The extdom_extop plugin receives the information from the SSSD service on the IdM server, which completes the LDAP extended operation. The SSSD service on the IdM client receives the AD user information from the LDAP extended operation. The IdM client stores the AD user information in its SSSD cache and returns the information to the application that requested it. 8.3. Data flow when authenticating as a user with SSSD in IdM Authenticating as a user on an IdM server or client involves the following components: The service that initiates the authentication request, such as the sshd service. The Pluggable Authentication Module (PAM) library and its modules. The SSSD service, its responders, and back-ends. A smart card reader, if smart card authentication is configured. The authentication server: IdM users are authenticated against an IdM Kerberos Key Distribution Center (KDC). Active Directory (AD) users are authenticated against an AD Domain Controller (DC). The following diagram is a simplification of the information flow when a user needs to authenticate during an attempt to log in locally to a host via the SSH service on the command line. The authentication attempt with the ssh command triggers the libpam library. The libpam library references the PAM file in the /etc/pam.d/ directory that corresponds to the service requesting the authentication attempt. In this example involving authenticating via the SSH service on the local host, the libpam library checks the /etc/pam.d/system-auth configuration file and discovers the pam_sss.so entry for the SSSD PAM: To determine which authentication methods are available, the libpam library opens the pam_sss module and sends an SSS_PAM_PREAUTH request to the sssd_pam PAM responder of the SSSD service. 
If smart card authentication is configured, the SSSD service spawns a temporary p11_child process to check for a smart card and retrieve certificates from it. If smart card authentication is configured for the user, the sssd_pam responder attempts to match the certificate from the smart card with the user. The sssd_pam responder also performs a search for the groups that the user belongs to, since group membership might affect access control. The sssd_pam responder sends an SSS_PAM_PREAUTH request to the sssd_be back-end responder to see which authentication methods the server supports, such as passwords or 2-factor authentication. In an IdM environment, where the SSSD service uses the IPA responder, the default authentication method is Kerberos. For this example, the user authenticates with a simple Kerberos password. The sssd_be responder spawns a temporary krb5_child process. The krb5_child process contacts the KDC on the IdM server and checks for available authentication methods. The KDC responds to the request: The krb5_child process evaluates the reply and sends the results back to the sssd_be backend process. The sssd_be backend process receives the result. The sssd_pam responder receives the result. The pam_sss module receives the result. If password authentication is configured for the user, the pam_sss module prompts the user for their password. If smart card authentication is configured, the pam_sss module prompts the user for their smart card PIN. The module sends an SSS_PAM_AUTHENTICATE request with the user name and password, which travels to: The sssd_pam responder. The sssd_be back-end process. The sssd_be process spawns a temporary krb5_child process to contact the KDC. The krb5_child process attempts to retrieve a Kerberos Ticket Granting Ticket (TGT) from the KDC with the user name and password the user provided. The krb5_child process receives the result of the authentication attempt. The krb5_child process: Stores the TGT in a credential cache. Returns the authentication result to the sssd_be back-end process. The authentication result travels from the sssd_be process to: The sssd_pam responder. The pam_sss module. The pam_sss module sets an environment variable with the location of the user's TGT so other applications can reference it. 8.4. Narrowing the scope of authentication issues To successfully authenticate a user, you must be able to retrieve user information with the SSSD service from the database that stores user information. The following procedure describes steps to test different components of the authentication process so you can narrow the scope of authentication issues when a user is unable to log in. Procedure Verify that the SSSD service and its processes are running. Verify that the client can contact the user database server via the IP address. If this step fails, check that your network and firewall settings allow direct communication between IdM clients and servers. See Using and configuring firewalld . Verify that the client can discover and contact the IdM LDAP server (for IdM users) or AD domain controller (for AD users) via the fully qualified host name. If this step fails, check your Dynamic Name Service (DNS) settings, including the /etc/resolv.conf file. See Configuring the order of DNS servers . Note By default, the SSSD service attempts to automatically discover LDAP servers and AD DCs through DNS service (SRV) records. 
Alternatively, you can restrict the SSSD service to use specific servers by setting the following options in the sssd.conf configuration file: ipa_server = <fully_qualified_host_name_of_the_server> ad_server = <fully_qualified_host_name_of_the_server> ldap_uri = <fully_qualified_host_name_of_the_server> If you use these options, verify you can contact the servers listed in them. Verify that the client can authenticate to the LDAP server and retrieve user information with ldapsearch commands. If your LDAP server is an IdM server, like server.example.com , retrieve a Kerberos ticket for the host and perform the database search authenticating with the host Kerberos principal: If your LDAP server is an Active Directory (AD) Domain Controller (DC), like server.ad.example.com , retrieve a Kerberos ticket for the host and perform the database search authenticating with the host Kerberos principal: If your LDAP server is a plain LDAP server, and you have set the ldap_default_bind_dn and ldap_default_authtok options in the sssd.conf file, authenticate as the same ldap_default_bind_dn account: If this step fails, verify that your database settings allow your host to search the LDAP server. Since the SSSD service uses Kerberos encryption, verify you can obtain a Kerberos ticket as the user that is unable to log in. If your LDAP server is an IdM server: If your LDAP server database is an AD server: If this step fails, verify that your Kerberos server is operating properly, all servers have their times synchronized, and that the user account is not locked. Verify you can retrieve user information on the command line. If this step fails, verify that the SSSD service on the client can receive information from the user database: Review errors in the /var/log/messages log file. Enable detailed logging in the SSSD service, collect debugging logs, and review the logs for indications of the source of the issue. Optional: Open a Red Hat Technical Support case and provide the troubleshooting information you have gathered. If you are allowed to run sudo on the host, use the sssctl utility to verify the user is allowed to log in. If this step fails, verify your authorization settings, such as your PAM configuration, IdM HBAC rules, and IdM RBAC rules: Ensure that the user's UID is equal to or higher than UID_MIN , which is defined in the /etc/login.defs file. Review authorization errors in the /var/log/secure and /var/log/messages log files. Enable detailed logging in the SSSD service, collect debugging logs, and review the logs for indications of the source of the issue. Optional: Open a Red Hat Technical Support case and provide the troubleshooting information you have gathered. Additional resources Enabling detailed logging for SSSD in the sssd.conf file Enabling detailed logging for SSSD with the sssctl command Gathering debugging logs from the SSSD service to troubleshoot authentication issues with an IdM server Gathering debugging logs from the SSSD service to troubleshoot authentication issues with an IdM client 8.5. SSSD log files and logging levels Each SSSD service logs into its own log file in the /var/log/sssd/ directory. For an IdM server in the example.com IdM domain, its log files might look like this: 8.5.1. SSSD log file purposes krb5_child.log Log file for the short-lived helper process involved in Kerberos authentication. ldap_child.log Log file for the short-lived helper process involved in getting a Kerberos ticket for the communication with the LDAP server. 
sssd_<example.com>.log For each domain section in the sssd.conf file, the SSSD service logs information about communication with the LDAP server to a separate log file. For example, in an environment with an IdM domain named example.com , the SSSD service logs its information in a file named sssd_example.com.log . If a host is directly integrated with an AD domain named ad.example.com , information is logged to a file named sssd_ad.example.com.log . Note If you have an IdM environment and a cross-forest trust with an AD domain, information about the AD domain is still logged to the log file for the IdM domain. Similarly, if a host is directly integrated to an AD domain, information about any child domains is written in the log file for the primary domain. selinux_child.log Log file for the short-lived helper process that retrieves and sets SELinux information. sssd.log Log file for SSSD monitoring and communicating with its responder and backend processes. sssd_ifp.log Log file for the InfoPipe responder, which provides a public D-Bus interface accessible over the system bus. sssd_nss.log Log file for the Name Services Switch (NSS) responder that retrieves user and group information. sssd_pac.log Log file for the Microsoft Privilege Attribute Certificate (PAC) responder, which collects the PAC from AD Kerberos tickets and derives information about AD users from the PAC, which avoids requesting it directly from AD. sssd_pam.log Log file for the Pluggable Authentication Module (PAM) responder. sssd_ssh.log Log file for the SSH responder process. 8.5.2. SSSD logging levels Setting a debug level also enables all debug levels below it. For example, setting the debug level at 6 also enables debug levels 0 through 5. Table 8.1. SSSD logging levels Level Description 0 Fatal failures . Errors that prevent the SSSD service from starting up or cause it to terminate. This is the default debug log level for RHEL 8.3 and earlier. 1 Critical failures . Errors that do not terminate the SSSD service, but at least one major feature is not working properly. 2 Serious failures . Errors announcing that a particular request or operation has failed. This is the default debug log level for RHEL 8.4 and later. 3 Minor failures . Errors that cause the operation failures captured at level 2. 4 Configuration settings. 5 Function data. 6 Trace messages for operation functions. 7 Trace messages for internal control functions. 8 Contents of function-internal variables. 9 Extremely low-level tracing information. 8.6. Enabling detailed logging for SSSD in the sssd.conf file By default, the SSSD service in RHEL 8.4 and later only logs serious failures (debug level 2), but it does not log at the level of detail necessary to troubleshoot authentication issues. To enable detailed logging persistently across SSSD service restarts, add the option debug_level= <integer> in each section of the /etc/sssd/sssd.conf configuration file, where the <integer> value is a number between 0 and 9. Debug levels up to 3 log larger failures, and levels 8 and higher provide a large number of detailed log messages. Level 6 is a good starting point for debugging authentication issues. Prerequisites You need the root password to edit the sssd.conf configuration file and restart the SSSD service. Procedure Open the /etc/sssd/sssd.conf file in a text editor. Add the debug_level option to every section of the file, and set the debug level to the verbosity of your choice. Save and close the sssd.conf file. 
Restart the SSSD service to load the new configuration settings. Additional resources SSSD log files and logging levels 8.7. Enabling detailed logging for SSSD with the sssctl command By default, the SSSD service in RHEL 8.4 and later only logs serious failures (debug level 2), but it does not log at the level of detail necessary to troubleshoot authentication issues. You can change the debug level of the SSSD service on the command line with the sssctl debug-level <integer> command, where the <integer> value is a number between 0 and 9. Debug levels up to 3 log larger failures, and levels 8 and higher provide a large number of detailed log messages. Level 6 is a good starting point for debugging authentication issues. Prerequisites You need the root password to run the sssctl command. Procedure Use the sssctl debug-level command to set the debug level to your desired verbosity. Additional resources SSSD log files and logging levels 8.8. Gathering debugging logs from the SSSD service to troubleshoot authentication issues with an IdM server If you experience issues when attempting to authenticate as an IdM user to an IdM server, enable detailed debug logging in the SSSD service on the server and gather logs of an attempt to retrieve information about the user. Prerequisites You need the root password to run the sssctl command and restart the SSSD service. Procedure Enable detailed SSSD debug logging on the IdM server. Invalidate objects in the SSSD cache for the user that is experiencing authentication issues, so you do not bypass the LDAP server and retrieve information SSSD has already cached. Minimize the troubleshooting dataset by removing older SSSD logs. Attempt to switch to the user experiencing authentication problems, while gathering timestamps before and after the attempt. These timestamps further narrow the scope of the dataset. Optional: Lower the debug level if you do not wish to continue gathering detailed SSSD logs. Review SSSD logs for information about the failed request. For example, reviewing the /var/log/sssd/sssd_example.com.log file shows that the SSSD service did not find the user in the cn=accounts,dc=example,dc=com LDAP subtree. This might indicate that the user does not exist, or exists in another location. If you are unable to determine the cause of the authentication issue: Collect the SSSD logs you recently generated. Open a Red Hat Technical Support case and provide: The SSSD logs: sssd-logs-Mar29.tar The console output, including the time stamps and user name, of the request that corresponds to the logs: 8.9. Gathering debugging logs from the SSSD service to troubleshoot authentication issues with an IdM client If you experience issues when attempting to authenticate as an IdM user to an IdM client, verify that you can retrieve user information on the IdM server. If you cannot retrieve the user information on an IdM server, you will not be able to retrieve it on an IdM client (which retrieves information from the IdM server). After you have confirmed that authentication issues do not originate from the IdM server, gather SSSD debugging logs from both the IdM server and IdM client. Prerequisites The user only has authentication issues on IdM clients, not IdM servers. You need the root password to run the sssctl command and restart the SSSD service. Procedure On the client: Open the /etc/sssd/sssd.conf file in a text editor. On the client: Add the ipa_server option to the [domain] section of the file and set it to an IdM server. 
This avoids the IdM client autodiscovering other IdM servers, thus limiting this test to just one client and one server. On the client: Save and close the sssd.conf file. On the client: Restart the SSSD service to load the configuration changes. On the server and client: Enable detailed SSSD debug logging. On the server and client: Invalidate objects in the SSSD cache for the user experiencing authentication issues, so you do not bypass the LDAP database and retrieve information SSSD has already cached. On the server and client: Minimize the troubleshooting dataset by removing older SSSD logs. On the client: Attempt to switch to the user experiencing authentication problems while gathering timestamps before and after the attempt. These timestamps further narrow the scope of the dataset. Optional: On the server and client: Lower the debug level if you do not wish to continue gathering detailed SSSD logs. On the server and client: Review SSSD logs for information about the failed request. Review the request from the client in the client logs. Review the request from the client in the server logs. Review the result of the request in the server logs. Review the outcome of the client receiving the results of the request from the server. If you are unable to determine the cause of the authentication issue: Collect the SSSD logs you recently generated on the IdM server and IdM client. Label them according to their hostname or role. Open a Red Hat Technical Support case and provide: The SSSD debug logs: sssd-logs-server-Mar29.tar from the server sssd-logs-client-Mar29.tar from the client The console output, including the time stamps and user name, of the request that corresponds to the logs: 8.10. Tracking client requests in the SSSD backend SSSD processes requests asynchronously and as messages from different requests are added to the same log file, you can use the unique request identifier and client ID to track client requests in the back-end logs. The unique request identifier is added to the debug logs in the form of RID#<integer> and the client ID in the form [CID #<integer] . This allows you to isolate logs pertaining to an individual request, and you can track requests from start to finish across log files from multiple SSSD components. Prerequisites You have enabled debug logging and a request has been submitted from an IdM client. You must have root privileges to display the contents of the SSSD log files. Procedure To review your SSSD log file, open the log file using the less utility. For example, to view the /var/log/sssd/sssd_example.com.log : Review the SSSD logs for information about the client request. This sample output from an SSSD log file shows the unique identifiers RID#3 and RID#4 for two different requests. However, a single client request to the SSSD client interface often triggers multiple requests in the backend and as a result it is not a 1-to-1 correlation between client request and requests in the backend. Though the multiple requests in the backend have different RID numbers, each initial backend request includes the unique client ID so an administrator can track the multiple RID numbers to the single client request. The following example shows one client request [sssd.nss CID #1] and the multiple requests generated in the backend, [RID#5] to [RID#13] : 8.11. 
Tracking client requests using the log analyzer tool The System Security Services Daemon (SSSD) includes a log parsing tool that can be used to track requests from start to finish across log files from multiple SSSD components. 8.11.1. How the log analyzer tool works Using the log parsing tool, you can track SSSD requests from start to finish across log files from multiple SSSD components. You run the analyzer tool using the sssctl analyze command. The log analyzer tool helps you to troubleshoot NSS and PAM issues in SSSD and more easily review SSSD debug logs. You can extract and print SSSD logs related only to certain client requests across SSSD processes. SSSD tracks user and group identity information ( id , getent ) separately from user authentication ( su , ssh ) information. The client ID (CID) in the NSS responder is independent of the CID in the PAM responder and you see overlapping numbers when analyzing NSS and PAM requests. Use the --pam option with the sssctl analyze command to review PAM requests. Note Requests returned from the SSSD memory cache are not logged and cannot be tracked by the log analyzer tool. Additional resources sudo sssctl analyze request --help sudo sssctl analyze --help sssd.conf and sssctl man pages on your system 8.11.2. Running the log analyzer tool Follow this procedure to use the log analyzer tool to track client requests in SSSD. Prerequisites You must set debug_level to at least 7 in the [USDresponder] section, and [domain/USDdomain] section of the /etc/sssd/sssd.conf file to enable log parsing functionality. Logs to analyze must be from a compatible version of SSSD built with libtevent chain ID support, that is SSSD in RHEL 8.5 and later. Procedure Run the log analyzer tool in list mode to determine the client ID of the request you are tracking, adding the -v option to display verbose output: A verbose list of recent client requests made to SSSD is displayed. Note If analyzing PAM requests, run the sssctl analyze request list command with the --pam option. Run the log analyzer tool with the show [unique client ID] option to display logs pertaining to the specified client ID number: If required, you can run the log analyzer tool against log files, for example: Additional resources sssctl analyze request list --help sssctl analyze request show --help sssctl man page on your system 8.12. Additional resources General SSSD Debugging Procedures (Red Hat Knowledgebase)
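The debugging procedures in sections 8.8 and 8.9 follow the same pattern each time: raise the debug level, invalidate the cache for the affected user, clear old logs, reproduce the failure between two timestamps, then collect the logs. The sketch below simply chains those documented sssctl and su steps into one helper; the script name, the USERNAME variable, the date-based archive name, and running it as root are assumptions for illustration, not part of the product documentation.

#!/bin/bash
# gather-sssd-debug.sh - hypothetical helper; run as root on the IdM server or client under test.
USERNAME="${1:?usage: gather-sssd-debug.sh <user_name>}"

sssctl debug-level 6                     # enable detailed SSSD logging (level 6)
sssctl cache-expire -u "$USERNAME"       # invalidate cached entries so SSSD queries the LDAP server again
sssctl logs-remove                       # start from a clean set of log files

date                                     # timestamp before the reproduction attempt
su - "$USERNAME" -c true                 # reproduce the failing lookup or authentication, as in the documented su example
date                                     # timestamp after the reproduction attempt

sssctl debug-level 2                     # return to the default debug level
sssctl logs-fetch "sssd-logs-$(date +%b%d).tar"   # archive /var/log/sssd for review or a support case

On an IdM client, remember to pin ipa_server in sssd.conf first, as described in section 8.9, so the logs cover a single known server; the collected archive can then be examined with the sssctl analyze commands described in section 8.11.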
[ "auth sufficient pam_sss.so", "pstree -a | grep sssd |-sssd -i --logger=files | |-sssd_be --domain implicit_files --uid 0 --gid 0 --logger=files | |- sssd_be --domain example.com --uid 0 --gid 0 --logger=files | |-sssd_ifp --uid 0 --gid 0 --logger=files | |- sssd_nss --uid 0 --gid 0 --logger=files | |-sssd_pac --uid 0 --gid 0 --logger=files | |- sssd_pam --uid 0 --gid 0 --logger=files | |- sssd_ssh --uid 0 --gid 0 --logger=files | `-sssd_sudo --uid 0 --gid 0 --logger=files |-sssd_kcm --uid 0 --gid 0 --logger=files", "[user@client ~]USD ping <IP_address_of_the_database_server>", "[user@client ~]USD dig -t SRV _ldap._tcp.example.com @<name_server> [user@client ~]USD ping <fully_qualified_host_name_of_the_server>", "[user@client ~]USD kinit -k 'host/[email protected]' [user@client ~]USD ldapsearch -LLL -Y GSSAPI -h server.example.com -b \"dc=example,dc=com\" uid= <user_name>", "[user@client ~]USD kinit -k '[email protected]' [user@client ~]USD ldapsearch -LLL -Y GSSAPI -h server.ad.example.com -b \"dc=example,dc=com\" sAMAccountname= <user_name>", "[user@client ~]USD ldapsearch -xLLL -D \"cn=ldap_default_bind_dn_value\" -W -h ldapserver.example.com -b \"dc=example,dc=com\" uid= <user_name>", "[user@client ~]USD kinit <user_name>", "[user@client ~]USD kinit <[email protected]>", "[user@client ~]USD getent passwd <user_name> [user@client ~]USD id <user_name>", "[user@client ~]USD sudo sssctl user-checks -a auth -s ssh <user_name>", "ls -l /var/log/sssd/ total 620 -rw-------. 1 root root 0 Mar 29 09:21 krb5_child.log -rw-------. 1 root root 14324 Mar 29 09:50 ldap_child.log -rw-------. 1 root root 212870 Mar 29 09:50 sssd_example.com.log -rw-------. 1 root root 0 Mar 29 09:21 sssd_ifp.log -rw-------. 1 root root 0 Mar 29 09:21 sssd_implicit_files.log -rw-------. 1 root root 0 Mar 29 09:21 sssd.log -rw-------. 1 root root 219873 Mar 29 10:03 sssd_nss.log -rw-------. 1 root root 0 Mar 29 09:21 sssd_pac.log -rw-------. 1 root root 13105 Mar 29 09:21 sssd_pam.log -rw-------. 1 root root 9390 Mar 29 09:21 sssd_ssh.log -rw-------. 1 root root 0 Mar 29 09:21 sssd_sudo.log", "[domain/example.com] debug_level = 6 id_provider = ipa [sssd] debug_level = 6 services = nss, pam, ifp, ssh, sudo domains = example.com [nss] debug_level = 6 [pam] debug_level = 6 [sudo] debug_level = 6 [ssh] debug_level = 6 [pac] debug_level = 6 [ifp] debug_level = 6", "systemctl restart sssd", "sssctl debug-level 6", "sssctl debug-level 6", "sssctl cache-expire -u idmuser", "sssctl logs-remove", "date; su idmuser ; date Mon Mar 29 15:33:48 EDT 2021 su: user idmuser does not exist Mon Mar 29 15:33:49 EDT 2021", "sssctl debug-level 2", "(Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [dp_get_account_info_send] (0x0200): Got request for [0x1][BE_REQ_USER][ [email protected] ] (Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [(&(uid=idmuser)(objectclass=posixAccount)(uid= )(&(uidNumber= )(!(uidNumber=0))))][cn=accounts,dc=example,dc=com]. (Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [sdap_get_generic_op_finished] (0x0400): Search result: Success(0), no errmsg set (Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [sdap_search_user_process] (0x0400): Search for users, returned 0 results. 
(Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [sysdb_search_by_name] (0x0400): No such entry (Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [sysdb_delete_user] (0x0400): Error: 2 (No such file or directory) (Mon Mar 29 15:33:48 2021) [sssd[be[example.com]]] [sysdb_search_by_name] (0x0400): No such entry (Mon Mar 29 15:33:49 2021) [sssd[be[example.com]]] [ipa_id_get_account_info_orig_done] (0x0080): Object not found, ending request", "sssctl logs-fetch sssd-logs-Mar29.tar", "date; id idmuser; date Mon Mar 29 15:33:48 EDT 2021 id: 'idmuser': no such user Mon Mar 29 15:33:49 EDT 2021", "[domain/example.com] ipa_server = server.example.com", "systemctl restart sssd", "sssctl debug-level 6", "sssctl debug-level 6", "sssctl cache-expire -u idmuser", "sssctl cache-expire -u idmuser", "sssctl logs-remove", "sssctl logs-remove", "date; su idmuser; date Mon Mar 29 16:20:13 EDT 2021 su: user idmuser does not exist Mon Mar 29 16:20:14 EDT 2021", "sssctl debug-level 0", "sssctl debug-level 0", "sssctl logs-fetch sssd-logs-server-Mar29.tar", "sssctl logs-fetch sssd-logs-client-Mar29.tar", "date; su idmuser ; date Mon Mar 29 16:20:13 EDT 2021 su: user idmuser does not exist Mon Mar 29 16:20:14 EDT 2021", "less /var/log/sssd/sssd_example.com.log", "(2021-07-26 18:26:37): [be[testidm.com]] [dp_req_destructor] (0x0400): [RID#3] Number of active DP request: 0 (2021-07-26 18:26:37): [be[testidm.com]] [dp_req_reply_std] (0x1000): [RID#3] DP Request AccountDomain #3: Returning [Internal Error]: 3,1432158301,GetAccountDomain() not supported (2021-07-26 18:26:37): [be[testidm.com]] [dp_attach_req] (0x0400): [RID#4] DP Request Account #4: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-07-26 18:26:37): [be[testidm.com]] [dp_attach_req] (0x0400): [RID#4] Number of active DP request: 1", "(2021-10-29 13:24:16): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#5] DP Request [Account #5]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:16): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#6] DP Request [AccountDomain #6]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:16): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#7] DP Request [Account #7]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:17): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#8] DP Request [Initgroups #8]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:17): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#9] DP Request [Account #9]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:17): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#10] DP Request [Account #10]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:17): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#11] DP Request [Account #11]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:17): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#12] DP Request [Account #12]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001]. (2021-10-29 13:24:17): [be[ad.vm]] [dp_attach_req] (0x0400): [RID#13] DP Request [Account #13]: REQ_TRACE: New request. [sssd.nss CID #1] Flags [0x0001].", "sssctl analyze request list -v", "sssctl analyze request show 20", "sssctl analyze request --logdir=/tmp/var/log/sssd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/assembly_troubleshooting-authentication-with-sssd-in-idm_configuring-and-managing-idm
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_microsoft_azure/providing-feedback-on-red-hat-documentation_azure
Chapter 1. Authorization of web endpoints
Chapter 1. Authorization of web endpoints Quarkus incorporates a pluggable web security layer. When security is active, the system performs a permission check on all HTTP requests to determine if they should proceed. Using @PermitAll will not open a path if the path is restricted by the quarkus.http.auth. configuration. To ensure specific paths are accessible, appropriate configurations must be made within the Quarkus security settings. Note If you use Jakarta RESTful Web Services, consider using quarkus.security.jaxrs.deny-unannotated-endpoints or quarkus.security.jaxrs.default-roles-allowed to set default security requirements instead of HTTP path-level matching because annotations can override these properties on an individual endpoint. Authorization is based on user roles that the security provider provides. To customize these roles, a SecurityIdentityAugmentor can be created, see Security Identity Customization . 1.1. Authorization using configuration Permissions are defined in the Quarkus configuration by permission sets, each specifying a policy for access control. Table 1.1. Red Hat build of Quarkus policies summary Built-in policy Description deny This policy denies all users. permit This policy permits all users. authenticated This policy permits only authenticated users. You can define role-based policies that allow users with specific roles to access the resources. Example of a role-based policy quarkus.http.auth.policy.role-policy1.roles-allowed=user,admin 1 1 This defines a role-based policy that allows users with the user and admin roles. You can reference a custom policy by configuring the built-in permission sets that are defined in the application.properties file, as outlined in the following configuration example: Example of policy configuration quarkus.http.auth.permission.permit1.paths=/public/* 1 quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET quarkus.http.auth.permission.deny1.paths=/forbidden 2 quarkus.http.auth.permission.deny1.policy=deny quarkus.http.auth.permission.roles1.paths=/roles-secured/*,/other/*,/api/* 3 quarkus.http.auth.permission.roles1.policy=role-policy1 1 This permission references the default built-in permit policy to allow GET methods to /public . In this case, the demonstrated setting would not affect this example because this request is allowed anyway. 2 This permission references the built-in deny policy for /forbidden . It is an exact path match because it does not end with * . 3 This permission set references the previously defined policy. roles1 is an example name; you can call the permission sets whatever you want. Warning The exact path /forbidden in the example will not secure the /forbidden/ path. It is necessary to add a new exact path for the /forbidden/ path to ensure proper security coverage. 1.1.1. Custom HttpSecurityPolicy Sometimes it might be useful to register your own named policy. 
You can get it done by creating application scoped CDI bean that implements the io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy interface like in the example below: import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomNamedHttpSecPolicy implements HttpSecurityPolicy { @Override public Uni<CheckResult> checkPermission(RoutingContext event, Uni<SecurityIdentity> identity, AuthorizationRequestContext requestContext) { if (customRequestAuthorization(event)) { return Uni.createFrom().item(CheckResult.PERMIT); } return Uni.createFrom().item(CheckResult.DENY); } @Override public String name() { return "custom"; 1 } private static boolean customRequestAuthorization(RoutingContext event) { // here comes your own security check return !event.request().path().endsWith("denied"); } } 1 Named HTTP Security policy will only be applied to requests matched by the application.properties path matching rules. Example of custom named HttpSecurityPolicy referenced from configuration file quarkus.http.auth.permission.custom1.paths=/custom/* quarkus.http.auth.permission.custom1.policy=custom 1 1 Custom policy name must match the value returned by the io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy.name method. Tip You can also create global HttpSecurityPolicy invoked on every request. Just do not implement the io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy.name method and leave the policy nameless. 1.1.2. Matching on paths and methods Permission sets can also specify paths and methods as a comma-separated list. If a path ends with the * wildcard, the query it generates matches all sub-paths. Otherwise, it queries for an exact match and only matches that specific path: quarkus.http.auth.permission.permit1.paths=/public*,/css/*,/js/*,/robots.txt 1 quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD 1 The * wildcard at the end of the path matches zero or more path segments, but never any word starting from the /public path. For that reason, a path like /public-info is not matched by this pattern. 1.1.3. Matching a path but not a method The request is rejected if it matches one or more permission sets based on the path but none of the required methods. Tip Given the preceding permission set, GET /public/foo would match both the path and method and therefore be allowed. In contrast, POST /public/foo would match the path but not the method, and, therefore, be rejected. 1.1.4. Matching multiple paths: longest path wins Matching is always done on the "longest path wins" basis. Less specific permission sets are not considered if a more specific one has been matched: quarkus.http.auth.permission.permit1.paths=/public/* quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD quarkus.http.auth.permission.deny1.paths=/public/forbidden-folder/* quarkus.http.auth.permission.deny1.policy=deny Tip Given the preceding permission set, GET /public/forbidden-folder/foo would match both permission sets' paths. However, because the longer path matches the path of the deny1 permission set, deny1 is chosen, and the request is rejected. Note Subpath permissions precede root path permissions, as the deny1 versus permit1 permission example previously illustrated. 
This rule is further exemplified by a scenario where subpath permission allows access to a public resource while the root path permission necessitates authorization. quarkus.http.auth.policy.user-policy.roles-allowed=user quarkus.http.auth.permission.roles.paths=/api/* quarkus.http.auth.permission.roles.policy=user-policy quarkus.http.auth.permission.public.paths=/api/noauth/* quarkus.http.auth.permission.public.policy=permit 1.1.5. Matching multiple sub-paths: longest path to the * wildcard wins examples demonstrated matching all sub-paths when a path concludes with the * wildcard. This wildcard also applies in the middle of a path, representing a single path segment. It cannot be mixed with other path segment characters; thus, path separators always enclose the * wildcard, as seen in the /public/ * /about-us path. When several path patterns correspond to the same request path, the system selects the longest sub-path leading to the * wildcard. In this context, every path segment character is more specific than the * wildcard. Here is a simple example: quarkus.http.auth.permission.secured.paths=/api/*/detail 1 quarkus.http.auth.permission.secured.policy=authenticated quarkus.http.auth.permission.public.paths=/api/public-product/detail 2 quarkus.http.auth.permission.public.policy=permit 1 Request paths like /api/product/detail can only be accessed by authenticated users. 2 The path /api/public-product/detail is more specific, therefore accessible by anyone. Important All paths secured with the authorization using configuration should be tested. Writing path patterns with multiple wildcards can be cumbersome. Please make sure paths are authorized as you intended. In the following example, paths are ordered from the most specific to the least specific one: Request path /one/two/three/four/five matches ordered from the most specific to the least specific path /one/two/three/four/five /one/two/three/four/* /one/two/three/*/five /one/two/three/*/* /one/two/*/four/five /one/*/three/four/five /*/two/three/four/five /*/two/three/*/five /* Important The * wildcard at the end of the path matches zero or more path segments. The * wildcard placed anywhere else matches exactly one path segment. 1.1.6. Matching multiple paths: most specific method wins When a path is registered with multiple permission sets, the permission sets explicitly specifying an HTTP method that matches the request take precedence. In this instance, the permission sets without methods only come into effect if the request method does not match permission sets with the method specification. quarkus.http.auth.permission.permit1.paths=/public/* quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD quarkus.http.auth.permission.deny1.paths=/public/* quarkus.http.auth.permission.deny1.policy=deny Note The preceding permission set shows that GET /public/foo matches the paths of both permission sets.However, it specifically aligns with the explicit method of the permit1 permission set.Therefore, permit1 is selected, and the request is accepted. In contrast, PUT /public/foo does not match the method permissions of permit1 . As a result, deny1 is activated, leading to the rejection of the request. 1.1.7. Matching multiple paths and methods: both win Sometimes, the previously described rules allow multiple permission sets to win simultaneously. In that case, for the request to proceed, all the permissions must allow access. 
For this to happen, both must either have specified the method or have no method. Method-specific matches take precedence. quarkus.http.auth.policy.user-policy1.roles-allowed=user quarkus.http.auth.policy.admin-policy1.roles-allowed=admin quarkus.http.auth.permission.roles1.paths=/api/*,/restricted/* quarkus.http.auth.permission.roles1.policy=user-policy1 quarkus.http.auth.permission.roles2.paths=/api/*,/admin/* quarkus.http.auth.permission.roles2.policy=admin-policy1 Tip Given the preceding permission set, GET /api/foo would match both permission sets' paths, requiring both the user and admin roles. 1.1.8. Configuration properties to deny access The following configuration settings alter the role-based access control (RBAC) denying behavior: quarkus.security.jaxrs.deny-unannotated-endpoints=true|false If set to true, access is denied for all Jakarta REST endpoints by default. If a Jakarta REST endpoint has no security annotations, it defaults to the @DenyAll behavior. This helps you to avoid accidentally exposing an endpoint that is supposed to be secured. Defaults to false . quarkus.security.jaxrs.default-roles-allowed=role1,role2 Defines the default role requirements for unannotated endpoints. The ** role is a special role that means any authenticated user. This cannot be combined with deny-unannotated-endpoints because deny takes effect instead. quarkus.security.deny-unannotated-members=true|false If set to true, the access is denied to all CDI methods and Jakarta REST endpoints that do not have security annotations but are defined in classes that contain methods with security annotations. Defaults to false . 1.1.9. Disabling permissions Permissions can be disabled at build time with an enabled property for each declared permission, such as: quarkus.http.auth.permission.permit1.enabled=false quarkus.http.auth.permission.permit1.paths=/public/*,/css/*,/js/*,/robots.txt quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD Permissions can be reenabled at runtime with a system property or environment variable, such as: -Dquarkus.http.auth.permission.permit1.enabled=true . 1.1.10. Permission paths and HTTP root path The quarkus.http.root-path configuration property changes the http endpoint context path . By default, quarkus.http.root-path is prepended automatically to configured permission paths then do not use a forward slash, for example: quarkus.http.auth.permission.permit1.paths=public/*,css/*,js/*,robots.txt This configuration is equivalent to the following: quarkus.http.auth.permission.permit1.paths=USD{quarkus.http.root-path}/public/*,USD{quarkus.http.root-path}/css/*,USD{quarkus.http.root-path}/js/*,USD{quarkus.http.root-path}/robots.txt A leading slash changes how the configured permission path is interpreted. The configured URL is used as-is, and paths are not adjusted if the value of quarkus.http.root-path changes. Example: quarkus.http.auth.permission.permit1.paths=/public/*,css/*,js/*,robots.txt This configuration only impacts resources served from the fixed or static URL, /public , which might not match your application resources if quarkus.http.root-path has been set to something other than / . For more information, see Path Resolution in Quarkus . 1.1.11. Map SecurityIdentity roles Winning role-based policy can map the SecurityIdentity roles to the deployment-specific roles. These roles are then applicable for endpoint authorization by using the @RolesAllowed annotation. 
quarkus.http.auth.policy.admin-policy1.roles.admin=Admin1 1 quarkus.http.auth.permission.roles1.paths=/* quarkus.http.auth.permission.roles1.policy=admin-policy1 1 Map the admin role to Admin1 role. The SecurityIdentity will have both admin and Admin1 roles. 1.1.12. Shared permission checks One important rule for unshared permission checks is that only one path match is applied, the most specific one. Naturally you can specify as many permissions with the same winning path as you want and they will all be applied. However, there can be permission checks you want to apply to many paths without repeating them over and over again. That's where shared permission checks come in, they are always applied when the permission path is matched. Example of custom named HttpSecurityPolicy applied on every HTTP request quarkus.http.auth.permission.custom1.paths=/* quarkus.http.auth.permission.custom1.shared=true 1 quarkus.http.auth.permission.custom1.policy=custom quarkus.http.auth.policy.admin-policy1.roles-allowed=admin quarkus.http.auth.permission.roles1.paths=/admin/* quarkus.http.auth.permission.roles1.policy=admin-policy1 1 Custom HttpSecurityPolicy will be also applied on the /admin/1 path together with the admin-policy1 policy. Tip Configuring many shared permission checks is less effective than configuring unshared ones. Use shared permissions to complement unshared permission checks like in the example below. Map SecurityIdentity roles with shared permission quarkus.http.auth.policy.role-policy1.roles.root=admin,user 1 quarkus.http.auth.permission.roles1.paths=/secured/* 2 quarkus.http.auth.permission.roles1.policy=role-policy1 quarkus.http.auth.permission.roles1.shared=true quarkus.http.auth.policy.role-policy2.roles-allowed=user 3 quarkus.http.auth.permission.roles2.paths=/secured/user/* quarkus.http.auth.permission.roles2.policy=role-policy2 quarkus.http.auth.policy.role-policy3.roles-allowed=admin quarkus.http.auth.permission.roles3.paths=/secured/admin/* quarkus.http.auth.permission.roles3.policy=role-policy3 1 Role root will be able to access /secured/user/* and /secured/admin/* paths. 2 The /secured/* path can only be accessed by authenticated users. This way, you have secured the /secured/all path and so on. 3 Shared permissions are always applied before unshared ones, therefore a SecurityIdentity with the root role will have the user role as well. 1.2. Authorization using annotations Red Hat build of Quarkus includes built-in security to allow for Role-Based Access Control (RBAC) based on the common security annotations @RolesAllowed , @DenyAll , @PermitAll on REST endpoints and CDI beans. Table 1.2. Red Hat build of Quarkus annotation types summary Annotation type Description @DenyAll Specifies that no security roles are allowed to invoke the specified methods. @PermitAll Specifies that all security roles are allowed to invoke the specified methods. @PermitAll lets everybody in, even without authentication. @RolesAllowed Specifies the list of security roles allowed to access methods in an application. As an equivalent to @RolesAllowed("**") , Red Hat build of Quarkus also provides the io.quarkus.security.Authenticated annotation that permits any authenticated user to access the resource. The following SubjectExposingResource example demonstrates an endpoint that uses both Jakarta REST and Common Security annotations to describe and secure its endpoints. 
SubjectExposingResource example import java.security.Principal; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.SecurityContext; @Path("subject") public class SubjectExposingResource { @GET @Path("secured") @RolesAllowed("Tester") 1 public String getSubjectSecured(@Context SecurityContext sec) { Principal user = sec.getUserPrincipal(); 2 String name = user != null ? user.getName() : "anonymous"; return name; } @GET @Path("unsecured") @PermitAll 3 public String getSubjectUnsecured(@Context SecurityContext sec) { Principal user = sec.getUserPrincipal(); 4 String name = user != null ? user.getName() : "anonymous"; return name; } @GET @Path("denied") @DenyAll 5 public String getSubjectDenied(@Context SecurityContext sec) { Principal user = sec.getUserPrincipal(); String name = user != null ? user.getName() : "anonymous"; return name; } } 1 The /subject/secured endpoint requires an authenticated user with the granted "Tester" role through the use of the @RolesAllowed("Tester") annotation. 2 The endpoint obtains the user principal from the Jakarta REST SecurityContext . This returns non-null for a secured endpoint. 3 The /subject/unsecured endpoint allows for unauthenticated access by specifying the @PermitAll annotation. 4 The call to obtain the user principal returns null if the caller is unauthenticated and non-null if the caller is authenticated. 5 The /subject/denied endpoint declares the @DenyAll annotation, disallowing all direct access to it as a REST method, regardless of the user calling it. The method is still invokable internally by other methods in this class. Caution If you plan to use standard security annotations on the IO thread, review the information in Proactive Authentication . The @RolesAllowed annotation value supports property expressions including default values and nested property expressions. Configuration properties used with the annotation are resolved at runtime. Table 1.3. Annotation value examples Annotation Value explanation @RolesAllowed("USD{admin-role}") The endpoint allows users with the role denoted by the value of the admin-role property. @RolesAllowed("USD{tester.group}-USD{tester.role}") An example showing that the value can contain multiple variables. @RolesAllowed("USD{customer:User}") A default value demonstration. The required role is denoted by the value of the customer property. However, if that property is not specified, a role named User is required as a default. 
Example of a property expressions usage in the @RolesAllowed annotation admin=Administrator tester.group=Software tester.role=Tester %prod.secured=User %dev.secured=** all-roles=Administrator,Software,Tester,User Subject access control example import java.security.Principal; import jakarta.annotation.security.DenyAll; import jakarta.annotation.security.PermitAll; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.SecurityContext; @Path("subject") public class SubjectExposingResource { @GET @Path("admin") @RolesAllowed("USD{admin}") 1 public String getSubjectSecuredAdmin(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path("software-tester") @RolesAllowed("USD{tester.group}-USD{tester.role}") 2 public String getSubjectSoftwareTester(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path("user") @RolesAllowed("USD{customer:User}") 3 public String getSubjectUser(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path("secured") @RolesAllowed("USD{secured}") 4 public String getSubjectSecured(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path("list") @RolesAllowed("USD{all-roles}") 5 public String getSubjectList(@Context SecurityContext sec) { return getUsername(sec); } private String getUsername(SecurityContext sec) { Principal user = sec.getUserPrincipal(); String name = user != null ? user.getName() : "anonymous"; return name; } } 1 The @RolesAllowed annotation value is set to the value of Administrator . 2 This /subject/software-tester endpoint requires an authenticated user that has been granted the role of "Software-Tester". It is possible to use multiple expressions in the role definition. 3 This /subject/user endpoint requires an authenticated user that has been granted the role "User" through the use of the @RolesAllowed("USD{customer:User}") annotation because we did not set the configuration property customer . 4 In production, this /subject/secured endpoint requires an authenticated user with the User role. In development mode, it allows any authenticated user. 5 Property expression all-roles will be treated as a collection type List , therefore, the endpoint will be accessible for roles Administrator , Software , Tester and User . 1.2.1. Permission annotation Quarkus also provides the io.quarkus.security.PermissionsAllowed annotation, which authorizes any authenticated user with the given permission to access the resource. This annotation is an extension of the common security annotations and checks the permissions granted to a SecurityIdentity instance. 
Example of endpoints secured with the @PermissionsAllowed annotation package org.acme.crud; import io.quarkus.arc.Arc; import io.vertx.ext.web.RoutingContext; import jakarta.ws.rs.GET; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import jakarta.ws.rs.QueryParam; import io.quarkus.security.PermissionsAllowed; import java.security.BasicPermission; import java.security.Permission; import java.util.Collection; import java.util.Collections; @Path("/crud") public class CRUDResource { @PermissionsAllowed("create") 1 @PermissionsAllowed("update") @POST @Path("/modify/repeated") public String createOrUpdate() { return "modified"; } @PermissionsAllowed(value = {"create", "update"}, inclusive=true) 2 @POST @Path("/modify/inclusive") public String createOrUpdate(Long id) { return id + " modified"; } @PermissionsAllowed({"see:detail", "see:all", "read"}) 3 @GET @Path("/id/{id}") public String getItem(String id) { return "item-detail-" + id; } @PermissionsAllowed(value = "list", permission = CustomPermission.class) 4 @Path("/list") @GET public Collection<String> list(@QueryParam("query-options") String queryOptions) { // your business logic comes here return Collections.emptySet(); } public static class CustomPermission extends BasicPermission { public CustomPermission(String name) { super(name); } @Override public boolean implies(Permission permission) { var event = Arc.container().instance(RoutingContext.class).get(); 5 var publicContent = "public-content".equals(event.request().params().get("query-options")); var hasPermission = getName().equals(permission.getName()); return hasPermission && publicContent; } } } 1 The resource method createOrUpdate is only accessible for a user with both create and update permissions. 2 By default, at least one of the permissions specified through one annotation instance is required. You can require all permissions by setting inclusive=true . Both resource methods createOrUpdate have equal authorization requirements. 3 Access is granted to getItem if SecurityIdentity has either read permission or see permission and one of the all or detail actions. 4 You can use your preferred java.security.Permission implementation. By default, string-based permission is performed by io.quarkus.security.StringPermission . 5 Permissions are not beans, therefore the only way to obtain bean instances is programmatically by using Arc.container() . Caution If you plan to use the @PermissionsAllowed on the IO thread, review the information in Proactive Authentication . Note @PermissionsAllowed is not repeatable on the class level due to a limitation with Quarkus interceptors. For more information, see the Repeatable interceptor bindings section of the Quarkus "CDI reference" guide. The easiest way to add permissions to a role-enabled SecurityIdentity instance is to map roles to permissions. 
Use Authorization using configuration to grant the required SecurityIdentity permissions for CRUDResource endpoints to authenticated requests, as outlined in the following example: quarkus.http.auth.policy.role-policy1.permissions.user=see:all 1 quarkus.http.auth.policy.role-policy1.permissions.admin=create,update,read 2 quarkus.http.auth.permission.roles1.paths=/crud/modify/*,/crud/id/* 3 quarkus.http.auth.permission.roles1.policy=role-policy1 quarkus.http.auth.policy.role-policy2.permissions.user=list quarkus.http.auth.policy.role-policy2.permission-class=org.acme.crud.CRUDResourceUSDCustomPermission 4 quarkus.http.auth.permission.roles2.paths=/crud/list quarkus.http.auth.permission.roles2.policy=role-policy2 1 Add the permission see and the action all to the SecurityIdentity instance of the user role. Similarly, for the @PermissionsAllowed annotation, io.quarkus.security.StringPermission is used by default. 2 Permissions create , update , and read are mapped to the role admin . 3 The role policy role-policy1 allows only authenticated requests to access /crud/modify and /crud/id sub-paths. For more information about the path-matching algorithm, see Matching multiple paths: longest path wins later in this guide. 4 You can specify a custom implementation of the java.security.Permission class. Your custom class must define exactly one constructor that accepts the permission name and optionally some actions, for example, String array. In this scenario, the permission list is added to the SecurityIdentity instance as new CustomPermission("list") . You can also create a custom java.security.Permission class with additional constructor parameters. These additional parameters get matched with arguments of the method annotated with the @PermissionsAllowed annotation. Later, Quarkus instantiates your custom permission with actual arguments, with which the method annotated with the @PermissionsAllowed has been invoked. Example of a custom java.security.Permission class that accepts additional arguments package org.acme.library; import java.security.Permission; import java.util.Arrays; import java.util.Set; public class LibraryPermission extends Permission { private final Set<String> actions; private final Library library; public LibraryPermission(String libraryName, String[] actions, Library library) { 1 super(libraryName); this.actions = Set.copyOf(Arrays.asList(actions)); this.library = library; } @Override public boolean implies(Permission requiredPermission) { if (requiredPermission instanceof LibraryPermission) { LibraryPermission that = (LibraryPermission) requiredPermission; boolean librariesMatch = getName().equals(that.getName()); boolean requiredLibraryIsSublibrary = library.isParentLibraryOf(that.library); boolean hasOneOfRequiredActions = that.actions.stream().anyMatch(actions::contains); return (librariesMatch || requiredLibraryIsSublibrary) && hasOneOfRequiredActions; } return false; } // here comes your own implementation of the `java.security.Permission` class methods public static abstract class Library { protected String description; abstract boolean isParentLibraryOf(Library library); } public static class MediaLibrary extends Library { @Override boolean isParentLibraryOf(Library library) { return library instanceof MediaLibrary; } } public static class TvLibrary extends MediaLibrary { // TvLibrary specific implementation of the 'isParentLibraryOf' method } } 1 There must be exactly one constructor of a custom Permission class. 
The first parameter is always considered to be a permission name and must be of type String . Quarkus can optionally pass permission actions to the constructor. For this to happen, declare the second parameter as String[] . The LibraryPermission class permits access to the current or parent library if SecurityIdentity is allowed to perform one of the required actions, for example, read , write , or list . The following example shows how the LibraryPermission class can be used: package org.acme.library; import io.quarkus.security.PermissionsAllowed; import jakarta.enterprise.context.ApplicationScoped; import org.acme.library.LibraryPermission.Library; @ApplicationScoped public class LibraryService { @PermissionsAllowed(value = "tv:write", permission = LibraryPermission.class) 1 public Library updateLibrary(String newDesc, Library update) { update.description = newDesc; return update; } @PermissionsAllowed(value = "tv:write", permission = LibraryPermission.class, params = "library") 2 @PermissionsAllowed(value = {"tv:read", "tv:list"}, permission = LibraryPermission.class) public Library migrateLibrary(Library migrate, Library library) { // migrate libraries return library; } } 1 The formal parameter update is identified as the first Library parameter and gets passed to the LibraryPermission class. However, the LibraryPermission must be instantiated each time the updateLibrary method is invoked. 2 Here, the first Library parameter is migrate ; therefore, the library parameter gets marked explicitly through PermissionsAllowed#params . The permission constructor and the annotated method must have the parameter library set; otherwise, validation fails. Example of a resource secured with the LibraryPermission package org.acme.library; import io.quarkus.security.PermissionsAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.PUT; import jakarta.ws.rs.Path; import jakarta.ws.rs.PathParam; import org.acme.library.LibraryPermission.Library; @Path("/library") public class LibraryResource { @Inject LibraryService libraryService; @PermissionsAllowed(value = "tv:write", permission = LibraryPermission.class) @PUT @Path("/id/{id}") public Library updateLibrary(@PathParam("id") Integer id, Library library) { ... } @PUT @Path("/service-way/id/{id}") public Library updateLibrarySvc(@PathParam("id") Integer id, Library library) { String newDescription = "new description " + id; return libraryService.updateLibrary(newDescription, library); } } Similarly to the CRUDResource example, the following example shows how you can grant a user with the admin role permissions to update MediaLibrary : package org.acme.library; import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection 1 public class MediaLibraryPermission extends LibraryPermission { public MediaLibraryPermission(String libraryName, String[] actions) { super(libraryName, actions, new MediaLibrary()); 2 } } 1 When building a native executable, the permission class must be registered for reflection unless it is also used in at least one io.quarkus.security.PermissionsAllowed#name parameter. 2 We want to pass the MediaLibrary instance to the LibraryPermission constructor. 
quarkus.http.auth.policy.role-policy3.permissions.admin=media-library:list,media-library:read,media-library:write 1 quarkus.http.auth.policy.role-policy3.permission-class=org.acme.library.MediaLibraryPermission quarkus.http.auth.permission.roles3.paths=/library/* quarkus.http.auth.permission.roles3.policy=role-policy3 1 Grants the permission media-library , which permits read , write , and list actions. Because MediaLibrary is the TvLibrary class parent, a user with the admin role is also permitted to modify TvLibrary . Tip The /library/* path can be tested from a Keycloak provider Dev UI page, because the user alice which is created automatically by the Dev Services for Keycloak has an admin role. The examples provided so far demonstrate role-to-permission mapping. It is also possible to programmatically add permissions to the SecurityIdentity instance. In the following example, SecurityIdentity is customized to add the same permission that was previously granted with the HTTP role-based policy. Example of adding the LibraryPermission programmatically to SecurityIdentity import java.security.Permission; import java.util.function.Function; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.security.identity.AuthenticationRequestContext; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.security.identity.SecurityIdentityAugmentor; import io.quarkus.security.runtime.QuarkusSecurityIdentity; import io.smallrye.mutiny.Uni; @ApplicationScoped public class PermissionsIdentityAugmentor implements SecurityIdentityAugmentor { @Override public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { if (isNotAdmin(identity)) { return Uni.createFrom().item(identity); } return Uni.createFrom().item(build(identity)); } private boolean isNotAdmin(SecurityIdentity identity) { return identity.isAnonymous() || !"admin".equals(identity.getPrincipal().getName()); } SecurityIdentity build(SecurityIdentity identity) { Permission possessedPermission = new MediaLibraryPermission("media-library", new String[] { "read", "write", "list"}); 1 return QuarkusSecurityIdentity.builder(identity) .addPermissionChecker(new Function<Permission, Uni<Boolean>>() { 2 @Override public Uni<Boolean> apply(Permission requiredPermission) { boolean accessGranted = possessedPermission.implies(requiredPermission); return Uni.createFrom().item(accessGranted); } }) .build(); } } 1 The permission media-library that was created can perform read , write , and list actions. Because MediaLibrary is the TvLibrary class parent, a user with the admin role is also permitted to modify TvLibrary . 2 You can add a permission checker through io.quarkus.security.runtime.QuarkusSecurityIdentity.Builder#addPermissionChecker . Caution Annotation-based permissions do not work with custom Jakarta REST SecurityContexts because there are no permissions in jakarta.ws.rs.core.SecurityContext . 1.3. References Quarkus Security overview Quarkus Security architecture Authentication mechanisms in Quarkus Basic authentication Getting started with Security by using Basic authentication and Jakarta Persistence OpenID Connect Bearer Token Scopes And SecurityIdentity Permissions
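As a complement to Section 1.1.8, which describes the deny-by-default properties without a combined example, the following application.properties sketch shows one way they might be set. The role names are placeholders, and the two quarkus.security.jaxrs.* options are alternatives that, as noted above, cannot be combined.
# Deny every unannotated Jakarta REST endpoint (behaves like @DenyAll):
quarkus.security.jaxrs.deny-unannotated-endpoints=true
# Alternative to the property above: require these roles on unannotated endpoints instead.
#quarkus.security.jaxrs.default-roles-allowed=user,admin
# Deny unannotated CDI methods and endpoints declared in classes that contain secured methods:
quarkus.security.deny-unannotated-members=true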
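The Tip in Section 1.1.1 mentions a global, nameless HttpSecurityPolicy but does not show one. The sketch below is one possible shape for it, assuming that simply not overriding the name() method is enough to make the policy apply to every request; the X-Denied header check is a hypothetical placeholder for your own logic.
import jakarta.enterprise.context.ApplicationScoped;
import io.quarkus.security.identity.SecurityIdentity;
import io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy;
import io.smallrye.mutiny.Uni;
import io.vertx.ext.web.RoutingContext;

@ApplicationScoped
public class GlobalHttpSecPolicy implements HttpSecurityPolicy {

    @Override
    public Uni<CheckResult> checkPermission(RoutingContext event, Uni<SecurityIdentity> identity,
            AuthorizationRequestContext requestContext) {
        // Hypothetical check: deny any request that carries the X-Denied header.
        if (event.request().getHeader("X-Denied") != null) {
            return Uni.createFrom().item(CheckResult.DENY);
        }
        return Uni.createFrom().item(CheckResult.PERMIT);
    }

    // name() is intentionally not overridden, so the policy stays nameless and applies to every request.
}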
[ "quarkus.http.auth.policy.role-policy1.roles-allowed=user,admin 1", "quarkus.http.auth.permission.permit1.paths=/public/* 1 quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET quarkus.http.auth.permission.deny1.paths=/forbidden 2 quarkus.http.auth.permission.deny1.policy=deny quarkus.http.auth.permission.roles1.paths=/roles-secured/*,/other/*,/api/* 3 quarkus.http.auth.permission.roles1.policy=role-policy1", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.vertx.http.runtime.security.HttpSecurityPolicy; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomNamedHttpSecPolicy implements HttpSecurityPolicy { @Override public Uni<CheckResult> checkPermission(RoutingContext event, Uni<SecurityIdentity> identity, AuthorizationRequestContext requestContext) { if (customRequestAuthorization(event)) { return Uni.createFrom().item(CheckResult.PERMIT); } return Uni.createFrom().item(CheckResult.DENY); } @Override public String name() { return \"custom\"; 1 } private static boolean customRequestAuthorization(RoutingContext event) { // here comes your own security check return !event.request().path().endsWith(\"denied\"); } }", "quarkus.http.auth.permission.custom1.paths=/custom/* quarkus.http.auth.permission.custom1.policy=custom 1", "quarkus.http.auth.permission.permit1.paths=/public*,/css/*,/js/*,/robots.txt 1 quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD", "quarkus.http.auth.permission.permit1.paths=/public/* quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD quarkus.http.auth.permission.deny1.paths=/public/forbidden-folder/* quarkus.http.auth.permission.deny1.policy=deny", "quarkus.http.auth.policy.user-policy.roles-allowed=user quarkus.http.auth.permission.roles.paths=/api/* quarkus.http.auth.permission.roles.policy=user-policy quarkus.http.auth.permission.public.paths=/api/noauth/* quarkus.http.auth.permission.public.policy=permit", "quarkus.http.auth.permission.secured.paths=/api/*/detail 1 quarkus.http.auth.permission.secured.policy=authenticated quarkus.http.auth.permission.public.paths=/api/public-product/detail 2 quarkus.http.auth.permission.public.policy=permit", "/one/two/three/four/five /one/two/three/four/* /one/two/three/*/five /one/two/three/*/* /one/two/*/four/five /one/*/three/four/five /*/two/three/four/five /*/two/three/*/five /*", "quarkus.http.auth.permission.permit1.paths=/public/* quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD quarkus.http.auth.permission.deny1.paths=/public/* quarkus.http.auth.permission.deny1.policy=deny", "quarkus.http.auth.policy.user-policy1.roles-allowed=user quarkus.http.auth.policy.admin-policy1.roles-allowed=admin quarkus.http.auth.permission.roles1.paths=/api/*,/restricted/* quarkus.http.auth.permission.roles1.policy=user-policy1 quarkus.http.auth.permission.roles2.paths=/api/*,/admin/* quarkus.http.auth.permission.roles2.policy=admin-policy1", "quarkus.http.auth.permission.permit1.enabled=false quarkus.http.auth.permission.permit1.paths=/public/*,/css/*,/js/*,/robots.txt quarkus.http.auth.permission.permit1.policy=permit quarkus.http.auth.permission.permit1.methods=GET,HEAD", "quarkus.http.auth.permission.permit1.paths=public/*,css/*,js/*,robots.txt", 
"quarkus.http.auth.permission.permit1.paths=USD{quarkus.http.root-path}/public/*,USD{quarkus.http.root-path}/css/*,USD{quarkus.http.root-path}/js/*,USD{quarkus.http.root-path}/robots.txt", "quarkus.http.auth.permission.permit1.paths=/public/*,css/*,js/*,robots.txt", "quarkus.http.auth.policy.admin-policy1.roles.admin=Admin1 1 quarkus.http.auth.permission.roles1.paths=/* quarkus.http.auth.permission.roles1.policy=admin-policy1", "quarkus.http.auth.permission.custom1.paths=/* quarkus.http.auth.permission.custom1.shared=true 1 quarkus.http.auth.permission.custom1.policy=custom quarkus.http.auth.policy.admin-policy1.roles-allowed=admin quarkus.http.auth.permission.roles1.paths=/admin/* quarkus.http.auth.permission.roles1.policy=admin-policy1", "quarkus.http.auth.policy.role-policy1.roles.root=admin,user 1 quarkus.http.auth.permission.roles1.paths=/secured/* 2 quarkus.http.auth.permission.roles1.policy=role-policy1 quarkus.http.auth.permission.roles1.shared=true quarkus.http.auth.policy.role-policy2.roles-allowed=user 3 quarkus.http.auth.permission.roles2.paths=/secured/user/* quarkus.http.auth.permission.roles2.policy=role-policy2 quarkus.http.auth.policy.role-policy3.roles-allowed=admin quarkus.http.auth.permission.roles3.paths=/secured/admin/* quarkus.http.auth.permission.roles3.policy=role-policy3", "import java.security.Principal; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.SecurityContext; @Path(\"subject\") public class SubjectExposingResource { @GET @Path(\"secured\") @RolesAllowed(\"Tester\") 1 public String getSubjectSecured(@Context SecurityContext sec) { Principal user = sec.getUserPrincipal(); 2 String name = user != null ? user.getName() : \"anonymous\"; return name; } @GET @Path(\"unsecured\") @PermitAll 3 public String getSubjectUnsecured(@Context SecurityContext sec) { Principal user = sec.getUserPrincipal(); 4 String name = user != null ? user.getName() : \"anonymous\"; return name; } @GET @Path(\"denied\") @DenyAll 5 public String getSubjectDenied(@Context SecurityContext sec) { Principal user = sec.getUserPrincipal(); String name = user != null ? 
user.getName() : \"anonymous\"; return name; } }", "admin=Administrator tester.group=Software tester.role=Tester %prod.secured=User %dev.secured=** all-roles=Administrator,Software,Tester,User", "import java.security.Principal; import jakarta.annotation.security.DenyAll; import jakarta.annotation.security.PermitAll; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.SecurityContext; @Path(\"subject\") public class SubjectExposingResource { @GET @Path(\"admin\") @RolesAllowed(\"USD{admin}\") 1 public String getSubjectSecuredAdmin(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path(\"software-tester\") @RolesAllowed(\"USD{tester.group}-USD{tester.role}\") 2 public String getSubjectSoftwareTester(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path(\"user\") @RolesAllowed(\"USD{customer:User}\") 3 public String getSubjectUser(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path(\"secured\") @RolesAllowed(\"USD{secured}\") 4 public String getSubjectSecured(@Context SecurityContext sec) { return getUsername(sec); } @GET @Path(\"list\") @RolesAllowed(\"USD{all-roles}\") 5 public String getSubjectList(@Context SecurityContext sec) { return getUsername(sec); } private String getUsername(SecurityContext sec) { Principal user = sec.getUserPrincipal(); String name = user != null ? user.getName() : \"anonymous\"; return name; } }", "package org.acme.crud; import io.quarkus.arc.Arc; import io.vertx.ext.web.RoutingContext; import jakarta.ws.rs.GET; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import jakarta.ws.rs.QueryParam; import io.quarkus.security.PermissionsAllowed; import java.security.BasicPermission; import java.security.Permission; import java.util.Collection; import java.util.Collections; @Path(\"/crud\") public class CRUDResource { @PermissionsAllowed(\"create\") 1 @PermissionsAllowed(\"update\") @POST @Path(\"/modify/repeated\") public String createOrUpdate() { return \"modified\"; } @PermissionsAllowed(value = {\"create\", \"update\"}, inclusive=true) 2 @POST @Path(\"/modify/inclusive\") public String createOrUpdate(Long id) { return id + \" modified\"; } @PermissionsAllowed({\"see:detail\", \"see:all\", \"read\"}) 3 @GET @Path(\"/id/{id}\") public String getItem(String id) { return \"item-detail-\" + id; } @PermissionsAllowed(value = \"list\", permission = CustomPermission.class) 4 @Path(\"/list\") @GET public Collection<String> list(@QueryParam(\"query-options\") String queryOptions) { // your business logic comes here return Collections.emptySet(); } public static class CustomPermission extends BasicPermission { public CustomPermission(String name) { super(name); } @Override public boolean implies(Permission permission) { var event = Arc.container().instance(RoutingContext.class).get(); 5 var publicContent = \"public-content\".equals(event.request().params().get(\"query-options\")); var hasPermission = getName().equals(permission.getName()); return hasPermission && publicContent; } } }", "quarkus.http.auth.policy.role-policy1.permissions.user=see:all 1 quarkus.http.auth.policy.role-policy1.permissions.admin=create,update,read 2 quarkus.http.auth.permission.roles1.paths=/crud/modify/*,/crud/id/* 3 quarkus.http.auth.permission.roles1.policy=role-policy1 quarkus.http.auth.policy.role-policy2.permissions.user=list quarkus.http.auth.policy.role-policy2.permission-class=org.acme.crud.CRUDResourceUSDCustomPermission 4 
quarkus.http.auth.permission.roles2.paths=/crud/list quarkus.http.auth.permission.roles2.policy=role-policy2", "package org.acme.library; import java.security.Permission; import java.util.Arrays; import java.util.Set; public class LibraryPermission extends Permission { private final Set<String> actions; private final Library library; public LibraryPermission(String libraryName, String[] actions, Library library) { 1 super(libraryName); this.actions = Set.copyOf(Arrays.asList(actions)); this.library = library; } @Override public boolean implies(Permission requiredPermission) { if (requiredPermission instanceof LibraryPermission) { LibraryPermission that = (LibraryPermission) requiredPermission; boolean librariesMatch = getName().equals(that.getName()); boolean requiredLibraryIsSublibrary = library.isParentLibraryOf(that.library); boolean hasOneOfRequiredActions = that.actions.stream().anyMatch(actions::contains); return (librariesMatch || requiredLibraryIsSublibrary) && hasOneOfRequiredActions; } return false; } // here comes your own implementation of the `java.security.Permission` class methods public static abstract class Library { protected String description; abstract boolean isParentLibraryOf(Library library); } public static class MediaLibrary extends Library { @Override boolean isParentLibraryOf(Library library) { return library instanceof MediaLibrary; } } public static class TvLibrary extends MediaLibrary { // TvLibrary specific implementation of the 'isParentLibraryOf' method } }", "package org.acme.library; import io.quarkus.security.PermissionsAllowed; import jakarta.enterprise.context.ApplicationScoped; import org.acme.library.LibraryPermission.Library; @ApplicationScoped public class LibraryService { @PermissionsAllowed(value = \"tv:write\", permission = LibraryPermission.class) 1 public Library updateLibrary(String newDesc, Library update) { update.description = newDesc; return update; } @PermissionsAllowed(value = \"tv:write\", permission = LibraryPermission.class, params = \"library\") 2 @PermissionsAllowed(value = {\"tv:read\", \"tv:list\"}, permission = LibraryPermission.class) public Library migrateLibrary(Library migrate, Library library) { // migrate libraries return library; } }", "package org.acme.library; import io.quarkus.security.PermissionsAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.PUT; import jakarta.ws.rs.Path; import jakarta.ws.rs.PathParam; import org.acme.library.LibraryPermission.Library; @Path(\"/library\") public class LibraryResource { @Inject LibraryService libraryService; @PermissionsAllowed(value = \"tv:write\", permission = LibraryPermission.class) @PUT @Path(\"/id/{id}\") public Library updateLibrary(@PathParam(\"id\") Integer id, Library library) { } @PUT @Path(\"/service-way/id/{id}\") public Library updateLibrarySvc(@PathParam(\"id\") Integer id, Library library) { String newDescription = \"new description \" + id; return libraryService.updateLibrary(newDescription, library); } }", "package org.acme.library; import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection 1 public class MediaLibraryPermission extends LibraryPermission { public MediaLibraryPermission(String libraryName, String[] actions) { super(libraryName, actions, new MediaLibrary()); 2 } }", "quarkus.http.auth.policy.role-policy3.permissions.admin=media-library:list,media-library:read,media-library:write 1 quarkus.http.auth.policy.role-policy3.permission-class=org.acme.library.MediaLibraryPermission 
quarkus.http.auth.permission.roles3.paths=/library/* quarkus.http.auth.permission.roles3.policy=role-policy3", "import java.security.Permission; import java.util.function.Function; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.security.identity.AuthenticationRequestContext; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.security.identity.SecurityIdentityAugmentor; import io.quarkus.security.runtime.QuarkusSecurityIdentity; import io.smallrye.mutiny.Uni; @ApplicationScoped public class PermissionsIdentityAugmentor implements SecurityIdentityAugmentor { @Override public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { if (isNotAdmin(identity)) { return Uni.createFrom().item(identity); } return Uni.createFrom().item(build(identity)); } private boolean isNotAdmin(SecurityIdentity identity) { return identity.isAnonymous() || !\"admin\".equals(identity.getPrincipal().getName()); } SecurityIdentity build(SecurityIdentity identity) { Permission possessedPermission = new MediaLibraryPermission(\"media-library\", new String[] { \"read\", \"write\", \"list\"}); 1 return QuarkusSecurityIdentity.builder(identity) .addPermissionChecker(new Function<Permission, Uni<Boolean>>() { 2 @Override public Uni<Boolean> apply(Permission requiredPermission) { boolean accessGranted = possessedPermission.implies(requiredPermission); return Uni.createFrom().item(accessGranted); } }) .build(); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/authorization_of_web_endpoints/security-authorize-web-endpoints-reference
Chapter 15. Red Hat Enterprise Linux Atomic Host 7.6.3
Chapter 15. Red Hat Enterprise Linux Atomic Host 7.6.3 15.1. Atomic Host OStree update : New Tree Version: 7.6.3 (hash: d3fc043862e78ecb2b4f3f16938414039bc0a29a069ab6b6dfb3d4ae3a1494e8) Changes since Tree Version 7.6.2 (hash: 50c320468370132958eeeffb90a23431a5bd1cc717aa68d969eb471d78879e66) Updated packages : microdnf-2-6.el7 15.2. Extras Updated packages : docker-1.13.1-94.gitb2f74b2.el7 dpdk-18.11-3.el7_6 15.2.1. Container Images New : Red Hat Universal Base Image 7 Container Image (rhel7/ubi7-container) Updated : Red Hat Enterprise Linux 7.6 Container Image (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux 7.6 Container Image for aarch64 (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd)
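To move an existing system to this tree version, the standard OSTree update workflow applies. The commands below are a general sketch rather than steps specific to 7.6.3; after the reboot, atomic host status should report the commit hash listed above.
# Check the currently booted tree and any pending deployment:
atomic host status
# Download and deploy the new tree, then reboot into it:
atomic host upgrade
systemctl reboot
# After the reboot, verify the new deployment:
atomic host status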
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_6_3
3.6. Listing Domains
3.6. Listing Domains The realm list command lists every configured domain for the system, as well as the full details and default configuration for that domain. This is the same information as is returned by the realm discovery command, only for a domain that is already in the system configuration. The most notable options accepted by realm list are: --all The --all option lists all discovered domains, both configured and unconfigured. --name-only The --name-only option limits the results to the domain names and does not display the domain configuration details. For more information about the realm list command, see the realm (8) man page.
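For illustration, the options described above can be used as follows; both commands are read-only queries.
# Full details and default configuration for every configured domain:
realm list
# Domain names only, including discovered but unconfigured domains:
realm list --all --name-only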
[ "realm list --all --name-only ad.example.com" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/listing-domains-realmd
Hot Rod Node.JS Client Guide
Hot Rod Node.JS Client Guide Red Hat Data Grid 8.4 Configure and use Hot Rod JS clients Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_node.js_client_guide/index
1.6. LVS - A Block Diagram
1.6. LVS - A Block Diagram LVS routers use a collection of programs to monitor cluster members and cluster services. Figure 1.5, "LVS Components" illustrates how these various programs on both the active and backup LVS routers work together to manage the cluster. Figure 1.5. LVS Components The pulse daemon runs on both the active and passive LVS routers. On the backup router, pulse sends a heartbeat to the public interface of the active router to make sure the active router is still properly functioning. On the active router, pulse starts the lvs daemon and responds to heartbeat queries from the backup LVS router. Once started, the lvs daemon calls the ipvsadm utility to configure and maintain the IPVS routing table in the kernel and starts a nanny process for each configured virtual server on each real server. Each nanny process checks the state of one configured service on one real server, and tells the lvs daemon if the service on that real server is malfunctioning. If a malfunction is detected, the lvs daemon instructs ipvsadm to remove that real server from the IPVS routing table. If the backup router does not receive a response from the active router, it initiates failover by calling send_arp to reassign all virtual IP addresses to the NIC hardware addresses ( MAC address) of the backup node, sends a command to the active router via both the public and private network interfaces to shut down the lvs daemon on the active router, and starts the lvs daemon on the backup node to accept requests for the configured virtual servers. 1.6.1. LVS Components Section 1.6.1.1, " pulse " shows a detailed list of each software component in an LVS router. 1.6.1.1. pulse This is the controlling process which starts all other daemons related to LVS routers. At boot time, the daemon is started by the /etc/rc.d/init.d/pulse script. It then reads the configuration file /etc/sysconfig/ha/lvs.cf . On the active router, pulse starts the LVS daemon. On the backup router, pulse determines the health of the active router by executing a simple heartbeat at a user-configurable interval. If the active router fails to respond after a user-configurable interval, it initiates failover. During failover, pulse on the backup router instructs the pulse daemon on the active router to shut down all LVS services, starts the send_arp program to reassign the floating IP addresses to the backup router's MAC address, and starts the lvs daemon. 1.6.1.2. lvs The lvs daemon runs on the active LVS router once called by pulse . It reads the configuration file /etc/sysconfig/ha/lvs.cf , calls the ipvsadm utility to build and maintain the IPVS routing table, and assigns a nanny process for each configured LVS service. If nanny reports a real server is down, lvs instructs the ipvsadm utility to remove the real server from the IPVS routing table. 1.6.1.3. ipvsadm This service updates the IPVS routing table in the kernel. The lvs daemon sets up and administers LVS by calling ipvsadm to add, change, or delete entries in the IPVS routing table. 1.6.1.4. nanny The nanny monitoring daemon runs on the active LVS router. Through this daemon, the active router determines the health of each real server and, optionally, monitors its workload. A separate process runs for each service defined on each real server. 1.6.1.5. /etc/sysconfig/ha/lvs.cf This is the LVS configuration file. Directly or indirectly, all daemons get their configuration information from this file. 1.6.1.6. 
Piranha Configuration Tool This is the Web-based tool for monitoring, configuring, and administering LVS. This is the default tool to maintain the /etc/sysconfig/ha/lvs.cf LVS configuration file. 1.6.1.7. send_arp This program sends out ARP broadcasts when the floating IP address changes from one node to another during failover. Chapter 2, Initial LVS Configuration reviews important post-installation configuration steps you should take before configuring Red Hat Enterprise Linux to be an LVS router.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-block-diagram-VSA
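To make the division of labor among pulse, lvs, nanny, and ipvsadm more concrete, here is a minimal shell sketch of the ipvsadm calls that the lvs daemon issues behind the scenes. The VIP, real-server addresses, scheduler, and weights are assumed example values, not settings taken from an actual /etc/sysconfig/ha/lvs.cf.

#!/bin/bash
# Sketch of the IPVS routing-table changes that lvs and nanny drive automatically.
# 192.0.2.10 (VIP), 10.0.0.11/12 (real servers), and the wlc scheduler are
# placeholder values for illustration only.

VIP=192.0.2.10

# Create a virtual HTTP service on the VIP using weighted least-connections.
ipvsadm -A -t ${VIP}:80 -s wlc

# Add two real servers behind the virtual service (NAT forwarding, equal weight).
ipvsadm -a -t ${VIP}:80 -r 10.0.0.11:80 -m -w 1
ipvsadm -a -t ${VIP}:80 -r 10.0.0.12:80 -m -w 1

# What lvs does when a nanny process reports 10.0.0.12 as malfunctioning:
ipvsadm -d -t ${VIP}:80 -r 10.0.0.12:80

# Inspect the resulting IPVS routing table.
ipvsadm -L -n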
Chapter 36. StatefulSetTemplate schema reference
Chapter 36. StatefulSetTemplate schema reference Used in: KafkaClusterTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate podManagementPolicy PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady . Defaults to Parallel . string (one of [OrderedReady, Parallel])
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-statefulsettemplate-reference
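To show where the StatefulSetTemplate properties above plug in, the following sketch applies a minimal Kafka custom resource that sets podManagementPolicy and template metadata for both the Kafka and ZooKeeper StatefulSets. The cluster name, label, replica counts, and storage type are illustrative assumptions, and the sketch presumes an OpenShift cluster with the Streams operator already installed.

#!/bin/bash
# Sketch: set StatefulSetTemplate properties through the Kafka cluster templates.
# "my-cluster", the label, and the sizing below are assumed example values.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
    template:
      statefulset:
        metadata:
          labels:
            tier: messaging        # assumed label via MetadataTemplate
        podManagementPolicy: OrderedReady
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
    template:
      statefulset:
        podManagementPolicy: OrderedReady
EOF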
Chapter 1. Overview of nodes
Chapter 1. Overview of nodes 1.1. About nodes A node is a virtual or bare-metal machine in a Kubernetes cluster. Worker nodes host your application containers, grouped as pods. The control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane nodes contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. Having stable and healthy nodes in a cluster is fundamental to the smooth functioning of your hosted application. In OpenShift Container Platform, you can access, manage, and monitor a node through the Node object representing the node. Using the OpenShift CLI ( oc ) or the web console, you can perform the following operations on a node. The following components of a node are responsible for maintaining the running of pods and providing the Kubernetes runtime environment. Container runtime The container runtime is responsible for running containers. Kubernetes offers several runtimes such as containerd, cri-o, rktlet, and Docker. Kubelet Kubelet runs on nodes and reads the container manifests. It ensures that the defined containers have started and are running. The kubelet process maintains the state of work and the node server. Kubelet manages network rules and port forwarding. The kubelet manages containers that are created by Kubernetes only. Kube-proxy Kube-proxy runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. A Kube-proxy ensures that the networking environment is isolated and accessible. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. Read operations The read operations allow an administrator or a developer to get information about nodes in an OpenShift Container Platform cluster. List all the nodes in a cluster . Get information about a node, such as memory and CPU usage, health, status, and age. List pods running on a node . Management operations As an administrator, you can easily manage a node in an OpenShift Container Platform cluster through several tasks: Add or update node labels . A label is a key-value pair applied to a Node object. You can control the scheduling of pods using labels. Change node configuration using a custom resource definition (CRD), or the kubeletConfig object. Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a Ready status allow pod placement by default while the control plane nodes do not; you can change this default behavior by configuring the worker nodes to be unschedulable and the control plane nodes to be schedulable . Allocate resources for nodes using the system-reserved setting. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes. Configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. Reboot a node gracefully using pod anti-affinity . Delete a node from a cluster by scaling down the cluster using a compute machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node. 
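The read and management operations listed above map onto a handful of OpenShift CLI calls; the following sketch uses an assumed node name (worker-0) and label, so substitute values from your own cluster.

#!/bin/bash
# Sketch of typical node read and management operations with the oc CLI.
# "worker-0" and the tier=frontend label are placeholder assumptions.

NODE=worker-0

# Read operations: list nodes, inspect one, and check resource usage.
oc get nodes
oc describe node "${NODE}"
oc adm top node "${NODE}"

# List the pods running on the node.
oc get pods --all-namespaces --field-selector spec.nodeName="${NODE}"

# Management operations: label the node and control scheduling.
oc label node "${NODE}" tier=frontend
oc adm cordon "${NODE}"      # mark the node unschedulable
oc adm uncordon "${NODE}"    # allow scheduling again

# Drain the node before manually deleting it from a bare-metal cluster.
oc adm drain "${NODE}" --ignore-daemonsets --delete-emptydir-data
oc delete node "${NODE}"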
Enhancement operations OpenShift Container Platform allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient and application-friendly, and to provide a better environment for your developers. Manage node-level tuning for high-performance applications that require some level of kernel tuning by using the Node Tuning Operator . Enable TLS security profiles on the node to protect communication between the kubelet and the Kubernetes API server. Run background tasks on nodes automatically with daemon sets . You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes. Free node resources using garbage collection . You can ensure that your nodes are running efficiently by removing terminated containers and the images not referenced by any running pods. Add kernel arguments to a set of nodes . Configure an OpenShift Container Platform cluster to have worker nodes at the network edge (remote worker nodes). For information on the challenges of having remote worker nodes in an OpenShift Container Platform cluster and some recommended approaches for managing pods on a remote worker node, see Using remote worker nodes at the network edge . 1.2. About pods A pod is one or more containers deployed together on a node. As a cluster administrator, you can define a pod, assign it to run on a healthy node that is ready for scheduling, and manage it. A pod runs as long as the containers are running. You cannot change a pod once it is defined and is running. Some operations you can perform when working with pods are: Read operations As an administrator, you can get information about pods in a project through the following tasks: List pods associated with a project , including information such as the number of replicas and restarts, current status, and age. View pod usage statistics such as CPU, memory, and storage consumption. Management operations The following list of tasks provides an overview of how an administrator can manage pods in an OpenShift Container Platform cluster. Control scheduling of pods using the advanced scheduling features available in OpenShift Container Platform: Node-to-pod binding rules such as pod affinity , node affinity , and anti-affinity . Node labels and selectors . Taints and tolerations . Pod topology spread constraints . Secondary scheduling . Configure the descheduler to evict pods based on specific strategies so that the scheduler reschedules the pods to more appropriate nodes. Configure how pods behave after a restart using pod controllers and restart policies . Limit both egress and ingress traffic on a pod . Add and remove volumes to and from any object that has a pod template . A volume is a mounted file system available to all the containers in a pod. Container storage is ephemeral; you can use volumes to persist container data. Enhancement operations You can work with pods more easily and efficiently with the help of various tools and features available in OpenShift Container Platform. The following operations involve using those tools and features to better manage pods. Operation User More information Create and use a horizontal pod autoscaler. Developer You can use a horizontal pod autoscaler to specify the minimum and the maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. 
Using a horizontal pod autoscaler, you can automatically scale pods . Install and use a vertical pod autoscaler . Administrator and developer As an administrator, use a vertical pod autoscaler to better use cluster resources by monitoring the resources and the resource requirements of workloads. As a developer, use a vertical pod autoscaler to ensure your pods stay up during periods of high demand by scheduling pods to nodes that have enough resources for each pod. Provide access to external resources using device plugins. Administrator A device plugin is a gRPC service running on nodes (external to the kubelet), which manages specific hardware resources. You can deploy a device plugin to provide a consistent and portable solution to consume hardware devices across clusters. Provide sensitive data to pods using the Secret object . Administrator Some applications need sensitive information, such as passwords and usernames. You can use the Secret object to provide such information to an application pod. 1.3. About containers A container is the basic unit of an OpenShift Container Platform application, which comprises the application code packaged along with its dependencies, libraries, and binaries. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. Linux container technologies are lightweight mechanisms for isolating running processes and limiting access to only designated resources. As an administrator, you can perform various tasks on a Linux container, such as: Copy files to and from a container . Allow containers to consume API objects . Execute remote commands in a container . Use port forwarding to access applications in a container . OpenShift Container Platform provides specialized containers called Init containers . Init containers run before application containers and can contain utilities or setup scripts not present in an application image. You can use an Init container to perform tasks before the rest of a pod is deployed. Apart from performing specific tasks on nodes, pods, and containers, you can work with the overall OpenShift Container Platform cluster to keep the cluster efficient and the application pods highly available. 1.4. About autoscaling pods on a node OpenShift Container Platform offers three tools that you can use to automatically scale the number of pods on your nodes and the resources allocated to pods. Horizontal Pod Autoscaler The Horizontal Pod Autoscaler (HPA) can automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. For more information, see Automatically scaling pods with the horizontal pod autoscaler . Custom Metrics Autoscaler The Custom Metrics Autoscaler can automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. For more information, see Custom Metrics Autoscaler Operator overview . Vertical Pod Autoscaler The Vertical Pod Autoscaler (VPA) can automatically review the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. For more information, see Automatically adjust pod resource levels with the vertical pod autoscaler . 1.5. 
Glossary of common terms for OpenShift Container Platform nodes This glossary defines common terms that are used in the node content. Container A lightweight and executable image that comprises software and all its dependencies. Containers virtualize the operating system; as a result, you can run containers anywhere, from a data center to a public or private cloud to even a developer's laptop. Daemon set Ensures that a replica of the pod runs on eligible nodes in an OpenShift Container Platform cluster. egress The process of data sharing externally through a network's outbound traffic from a pod. garbage collection The process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Horizontal Pod Autoscaler (HPA) Implemented as a Kubernetes API resource and a controller. You can use the HPA to specify the minimum and maximum number of pods that you want to run. You can also specify the CPU or memory utilization that your pods should target. The HPA scales out and scales in pods when a given CPU or memory threshold is crossed. Ingress Incoming traffic to a pod. Job A process that runs to completion. A job creates one or more pod objects and ensures that the specified pods are successfully completed. Labels You can use labels, which are key-value pairs, to organize and select subsets of objects, such as a pod. Node A worker machine in the OpenShift Container Platform cluster. A node can be either a virtual machine (VM) or a physical machine. Node Tuning Operator You can use the Node Tuning Operator to manage node-level tuning by using the TuneD daemon. It ensures custom tuning specifications are passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Self Node Remediation Operator The Operator runs on the cluster nodes and identifies and reboots nodes that are unhealthy. Pod One or more containers with shared resources, such as volumes and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Toleration Indicates that the pod is allowed (but not required) to be scheduled on nodes or node groups with matching taints. You can use tolerations to enable the scheduler to schedule pods with matching taints. Taint A core object that comprises a key, value, and effect. Taints and tolerations work together to ensure that pods are not scheduled on irrelevant nodes.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/nodes/overview-of-nodes
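As a small illustration of the autoscaling tools described in the node overview above, the sketch below creates a horizontal pod autoscaler for an existing deployment with oc; the deployment name and thresholds are assumptions for the example.

#!/bin/bash
# Sketch: create and inspect a horizontal pod autoscaler for a deployment.
# "frontend", the replica bounds, and the CPU target are placeholder values.

# Scale between 2 and 10 replicas, targeting 75% average CPU utilization.
oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=75

# Review the autoscaler and the pod resource usage it acts on.
oc get hpa frontend
oc adm top pods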
7.2. Raising the Domain Level
7.2. Raising the Domain Level Important This is a non-reversible operation. If you raise the domain level from 0 to 1 , you cannot downgrade from 1 to 0 again. Command Line: Raising the Domain Level Log in as the administrator: Run the ipa domainlevel-set command and provide the required level: Web UI: Raising the Domain Level Select IPA Server Topology Domain Level . Click Set Domain Level .
[ "kinit admin", "ipa domainlevel-set 1 ----------------------- Current domain level: 1 -----------------------" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/domain-level-set
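If you want to confirm the current level before performing the non-reversible raise, the following sketch pairs the documented ipa domainlevel-set command with its read-only counterpart, ipa domainlevel-get; the admin principal matches the procedure above.

#!/bin/bash
# Sketch: verify the current domain level before raising it (raising is non-reversible).
kinit admin

# Read-only check of the current level.
ipa domainlevel-get

# Raise to level 1 only after confirming the topology supports it.
ipa domainlevel-set 1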
Chapter 9. Running certification tests for OpenStack certification
Chapter 9. Running certification tests for OpenStack certification Run certification tests on the OpenStack deployment under test based on the type of OpenStack application undergoing certification. 9.1. Running certification tests for products implementing OpenStack APIs If the OpenStack application undergoing certification implements OpenStack APIs, complete the following steps on the test server to run certification tests on the OpenStack deployment under test or test client. This category includes OpenStack plugins and drivers which implement OpenStack APIs for Networking, Block Storage, and File Share services. Additional resources For more information about products implementing OpenStack APIs, see Red Hat OpenStack Certification Policy Guide . 9.1.1. Running tempest_config test The tempest_config test automatically generates a tempest.conf file at run time. If you need to change the default configurations of the test, replace tempest.conf with a new file at the same location. Although the updated configuration can address any known tempest issues, note that tempest still needs to fulfill certification testing requirements. Prerequisites You have subscribed the application under test to the OpenStack product repositories to allow tempest to get installed. You have OpenStack administrator login privileges and credentials. Procedure The test is interactive. It checks for the presence of the tempest.conf file at location /etc/redhat-certification/openstack . If the file exists, you will receive a prompt asking if you want to replace it and enter the details manually. If you choose no , the test will use the keystone credentials from the existing file and proceed. However, if you choose yes , or if the tempest.conf file is not present in that location, you will be prompted to provide the following details: In the keystone auth url field, enter the URL to allow the test to access the OpenStack platform service endpoints. Enter the OpenStack administrator username and password. Update the tempest.conf file to enable all the flags applicable for the plugins that you are certifying. Click Submit . Additional resources If you face any tempest issues unrelated to the certification testing, use the following links to raise bugs: Upstream tempest project For downstream bugs, use either RHOSP Tempest component or Red Hat Certification Component . In the Component field: Select openstack-tempest for tempest-related issues. Select openstack-neutron , openstack-cinder , or openstack-manila for component-related issues. 9.2. Running certification tests for products consuming OpenStack APIs Red Hat considers the following as products or applications consuming OpenStack APIs: Products that facilitate deploying an OpenStack environment. Products that complement the cloud infrastructure with additional functionality, such as configuration, scaling, and management. Applications for OpenStack management and monitoring. Applications that are OpenStack-enabled, such as virtual network functions (VNFs). If the OpenStack application that you are certifying consumes OpenStack APIs, perform the following steps: Procedure Review the policy information described in the Red Hat OpenStack Certification Policy Guide . Run the certification tests as described in the Setting up the test server section . 9.3. Running trusted container test Procedure Navigate to the rhcert tool home page and select the trusted container test. Click Run Selected . 
Perform the following actions when the test prompts you: Provide the reason why you configured non-Red Hat containers on the host under test. Select the checkboxes of the containers you want to run the test on. 9.4. Running the OpenStack director test and the supportability tests Procedure On the Red Hat Certification home page, click the Server settings tab. In the Register a System field, enter the hostname or IP address of the overcloud node where you installed the application under test. Then, click Add . Click the existing product entry from the Red Hat Certification home page. Then, click the relevant certification entry from the Certifications page. The Progress page opens and displays the tests available. It also displays the status of the runs, if any. Click Testing to open the Testing tab. On the Testing tab, click Select Test Systems . On the Select Host page, select the hostname of the overcloud node where you installed the application-under-test. Then, click Test to return to the Testing tab. The rhcert tool now creates a certification test plan for the application-under-test. When the test plan is ready, the status column displays "Finished test run". The Continue Testing button also appears. Click Continue Testing . Select interactive for the openstack/supportable checkbox and then click Run Selected . The tool now runs the certification tests on the application-under-test. You can find the status of the test run on the Testing tab under the relevant hostname. After the test run completes, the test logs from the openstack/supportable tests are stored in the same log file as for the openstack/director test on the test server. 9.5. Additional Resources For more information about certification targets, see Red Hat OpenStack Certification Policy Guide .
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly-certification-tests-for-openstack-certification_rhosp-wf-configure-hosts-run-tests-use-cli
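Before starting the interactive tempest_config test, it can be useful to confirm that the keystone auth URL and administrator credentials you intend to enter actually authenticate. The sketch below is independent of the rhcert tooling, and the auth URL, username, password, and project names are placeholder assumptions.

#!/bin/bash
# Sketch: sanity-check the inputs that the tempest_config test prompts for.
# All OS_* values below are placeholders; replace them with your own.

export OS_AUTH_URL=https://overcloud.example.com:13000/v3
export OS_USERNAME=admin
export OS_PASSWORD='changeme'
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

# Confirm that the keystone credentials are valid and endpoints are reachable.
openstack token issue
openstack endpoint list

# Check whether an existing tempest.conf would be offered for reuse by the test.
ls -l /etc/redhat-certification/openstack/tempest.conf 2>/dev/null \
  && echo "Existing tempest.conf found; the test will ask whether to replace it."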
3.4. Creating a SecureBoot Red Hat Enterprise Linux 7 Guest with virt-manager
3.4. Creating a SecureBoot Red Hat Enterprise Linux 7 Guest with virt-manager This procedure covers creating a SecureBoot Red Hat Enterprise Linux 7 guest virtual machine with a locally stored installation DVD or DVD image. Red Hat Enterprise Linux 7 DVD images are available from the Red Hat Customer Portal . The SecureBoot feature ensures that your VM is running a cryptographically signed OS. If the guest OS of a VM has been altered by malware, SecureBoot prevents the VM from booting, which stops the potential spread of the malware to your host machine. Procedure 3.1. Creating a SecureBoot Red Hat Enterprise Linux 7 guest virtual machine with virt-manager using local installation media Perform steps 1 to 6 of Creating a Red Hat Enterprise Linux 7 Guest with virt-manager . Name and final configuration Name the virtual machine. Virtual machine names can contain letters, numbers, and the following characters: underscores ( _ ), periods ( . ), and hyphens ( - ). Virtual machine names must be unique for migration and cannot consist only of numbers. By default, the virtual machine will be created with network address translation (NAT) for a network called 'default' . To change the network selection, click Network selection and select a host device and source mode. Figure 3.1. Verifying the configuration To further configure the virtual machine's hardware, check the Customize configuration before install check box to change the guest's storage or network devices, to use the paravirtualized (virtio) drivers, or to add additional devices. Verify the settings of the virtual machine and click Finish when you are satisfied. This will open a new wizard for further configuring your virtual machine. Customize virtual machine hardware In the overview section of the wizard, select Q35 in the Chipset drop-down menu. In the Firmware drop-down menu, select UEFI x86_64. Figure 3.2. The configure hardware window Verify the settings of the virtual machine and click Apply when you are satisfied. Click Begin Installation to create a virtual machine with the specified networking settings, virtualization type, and architecture. A SecureBoot Red Hat Enterprise Linux 7 guest virtual machine is now created from an ISO installation disk image.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-guest_security-creating_a_secureboot_red_hat_enterprise_linux_7_guest_with_local_installation_media
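As a command-line alternative to the virt-manager steps above, the following virt-install sketch creates a UEFI guest with the Q35 machine type from a local ISO. The VM name, disk size, ISO path, and OS variant are assumptions, and depending on the OVMF packages on your host you may need to point the firmware at the SecureBoot-enabled OVMF build explicitly rather than relying on the uefi shorthand.

#!/bin/bash
# Sketch: create a UEFI (Q35) RHEL 7 guest from a local ISO with virt-install.
# Name, memory, disk size, ISO path, and os-variant are assumed example values.

virt-install \
  --name rhel7-secureboot \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/rhel-server-7.9-x86_64-dvd.iso \
  --os-variant rhel7.9 \
  --machine q35 \
  --boot uefi \
  --network network=default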