B.42. libnl
B.42. libnl B.42.1. RHBA-2011:0325 - libnl bug fix update Updated libnl packages that fix a bug are now available for Red Hat Enterprise Linux 6. The libnl package contains a convenience library to simplify using the Linux kernel netlink sockets interface for network manipulation. Bug Fix BZ# 676327 Some nl_send_auto_complete() callers did not free the allocated message when errors were reported, resulting in libnl leaking memory. Besides being a problem in its own right, these small leaks also made it harder to detect memory leaks in other processes. With this update, allocated messages are freed correctly when nl_send_auto_complete() is called, and libnl no longer leaks memory in this circumstance. All libnl users should upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/libnl
11.6. Using an External Provisioning System for Users and Groups
11.6. Using an External Provisioning System for Users and Groups Identity Management supports configuring your environment, so that an external solution for managing identities is used to provision user and group identities in IdM. This section describes an example of such configuration. The example includes: Section 11.6.1, "Configuring User Accounts to Be Used by the External Provisioning System" Section 11.6.2, "Configuring IdM to Automatically Activate Stage User Accounts" Section 11.6.3, "Configuring the LDAP Provider of the External Provisioning System to Manage the IdM Identities" 11.6.1. Configuring User Accounts to Be Used by the External Provisioning System This procedure shows how to configure two IdM user accounts to be used by the external provisioning system. By adding the accounts to a group with an appropriate password policy, you enable the external provisioning system to manage user provisioning in IdM. Create a user, provisionator , with the privileges to add stage users. The user account will be used by the external provisioning system to add new stage users. Add the provisionator user account: Grant the provisionator user the required privileges. Create a custom role, System Provisioning , to manage adding stage users: Add the Stage User Provisioning privilege to the role. This privilege provides the ability to add stage users: Add the provisionator user to the role: Create a user, activator , with the privileges to manage user accounts. The user account will be used to automatically activate stage users added by the external provisioning system. Add the activator user account: Grant the activator user the required privileges. Add the user to the default User Administrator role: Create a user group for service and application accounts: Update the password policy for the group. The following policy prevents password expiration and lockout for the account but compensates the potential risks by requiring complex passwords: Add the provisioning and activation accounts to the group for service and application accounts: Change the passwords for the user accounts: Changing the passwords is necessary because passwords of new IdM users expire immediately. Additional resources : For details on adding new users, see Section 11.2.1, "Adding Stage or Active Users" . For details on granting users the privileges required to manage other user accounts, see Section 11.5, "Allowing Non-admin Users to Manage User Entries" . For details on managing IdM password policies, see Chapter 28, Defining Password Policies . 11.6.2. Configuring IdM to Automatically Activate Stage User Accounts This procedure shows how to create a script for activating stage users. The system runs the script automatically at specified time intervals. This ensures that new user accounts are automatically activated and available for use shortly after they are created. Important The procedure assumes that the new user accounts do not require validation before the script adds them to IdM. For example, validation is not required when the users have already been validated by the owner of the external provisioning system. It is sufficient to enable the activation process on only one of your IdM servers. Generate a keytab file for the activation account: If you want to enable the activation process on more than one IdM server, generate the keytab file on one server only. Then copy the keytab file to the other servers. 
Create a script, /usr/local/sbin/ipa-activate-all , with the following contents to activate all users: Edit the permissions and ownership for the ipa-activate-all script to make it executable: Create a systemd unit file, /etc/systemd/system/ipa-activate-all.service , with the following contents: Create a systemd timer, /etc/systemd/system/ipa-activate-all.timer , with the following contents: Enable ipa-activate-all.timer : Additional resources: For more information on systemd unit files, see the Managing Services with systemd Unit Files chapter of the System Administrator's Guide . 11.6.3. Configuring the LDAP Provider of the External Provisioning System to Manage the IdM Identities This section shows templates for various user and group management operations. Using these templates, you can configure the LDAP provider of your provisioning system to manage IdM user accounts. For example, you can configure the system to inactivate a user account after the employee has left the company. Managing User Accounts Using LDAP You can add new user entries, modify existing entries, move users between different life cycle states, or delete users by editing the underlying Directory Server database. To edit the database, use the ldapmodify utility. The following LDIF-formatted templates provide information on what attributes to modify using ldapmodify . For detailed example procedures, see Example 11.2, "Adding a Stage User with ldapmodify " and Example 11.3, "Preserving a User with ldapmodify " . Adding a new stage user Adding a user with UID and GID automatically assigned: Adding a user with UID and GID statically assigned: You are not required to specify any IdM object classes when adding stage users. IdM adds these classes automatically after the users are activated. Note that the distinguished name (DN) of the created entry must start with uid= user_login . Modifying existing users Before modifying a user, obtain the user's distinguished name (DN) by searching by the user's login. In the following example, the user_allowed_to_read user is a user allowed to read user and group information, and password is this user's password: To modify a user's attribute: To disable a user: To enable a user: To preserve a user: Updating the nsAccountLock attribute has no effect on stage and preserved users. Even though the update operation completes successfully, the attribute value remains nsAccountLock: TRUE . Creating a new group To create a new group: Modifying groups Before modifying a group, obtain the group's distinguished name (DN) by searching by the group's name. To delete an existing group: To add a member to a group: To remove a member from a group: Do not add stage or preserved users to groups. Even though the update operation completes successfully, the users will not be updated as members of the group. Only active users can belong to groups. Example 11.2. Adding a Stage User with ldapmodify To add a new stageuser user using the standard inetorgperson object class: Use ldapmodify to add the user. Consider validating the contents of the stage entry to make sure your provisioning system added all required POSIX attributes and the stage entry is ready to be activated. To display the new stage user's LDAP attributes, use the ipa stageuser-show --all --raw command. Note that the user is explicitly disabled by the nsaccountlock attribute: Example 11.3.
Preserving a User with ldapmodify To preserve a user by using the LDAP modrdn operation: Use the ldapmodify utility to modify the user entry. Optionally, verify that the user has been preserved by listing all preserved users.
[ "ipa user-add provisionator --first=provisioning --last=account --password", "ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\"", "ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\"", "ipa role-add-member --users=provisionator \"System Provisioning\"", "ipa user-add activator --first=activation --last=account --password", "ipa role-add-member --users=activator \"User Administrator\"", "ipa group-add service-accounts", "ipa pwpolicy-add service-accounts --maxlife=10000 --minlife=0 --history=0 --minclasses=4 --minlength=20 --priority=1 --maxfail=0 --failinterval=1 --lockouttime=0", "ipa group-add-member service-accounts --users={provisionator,activator}", "kpasswd provisionator kpasswd activator", "ipa-getkeytab -s example.com -p \"activator\" -k /etc/krb5.ipa-activation.keytab", "#!/bin/bash kinit -k -i activator ipa stageuser-find --all --raw | grep \" uid:\" | cut -d \":\" -f 2 | while read uid; do ipa stageuser-activate USD{uid}; done", "chmod 755 /usr/local/sbin/ipa-activate-all chown root:root /usr/local/sbin/ipa-activate-all", "[Unit] Description=Scan IdM every minute for any stage users that must be activated [Service] Environment=KRB5_CLIENT_KTNAME=/etc/krb5.ipa-activation.keytab Environment=KRB5CCNAME=FILE:/tmp/krb5cc_ipa-activate-all ExecStart=/usr/local/sbin/ipa-activate-all", "[Unit] Description=Scan IdM every minute for any stage users that must be activated [Timer] OnBootSec=15min OnUnitActiveSec=1min [Install] WantedBy=multi-user.target", "systemctl enable ipa-activate-all.timer", "dn: uid= user_login ,cn=staged users,cn=accounts,cn=provisioning,dc= example ,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name", "dn: uid= user_login ,cn=staged users,cn=accounts,cn=provisioning,dc= example ,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/ user_login", "ldapsearch -LLL -x -D \"uid= user_allowed_to_read ,cn=users,cn=accounts,dc=example, dc=com\" -w \" password \" -H ldap:// server.example.com -b \"cn=users, cn=accounts, dc=example, dc=com\" uid= user_login", "dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE", "dn: distinguished_name changetype: modrdn newrdn: uid= user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=example", "dn: cn= group_distinguished_name ,cn=groups,cn=accounts,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup cn: group_name gidNumber: GID_number", "ldapsearch -YGSSAPI -H ldap:// server.example.com -b \"cn=groups,cn=accounts,dc=example,dc=com\" \"cn= group_name \"", "dn: group_distinguished_name changetype: delete", "dn: group_distinguished_name changetype: modify add: member member: uid= user_login ,cn=users,cn=accounts,dc=example,dc=com", "dn: distinguished_name changetype: modify delete: member member: uid= user_login ,cn=users,cn=accounts,dc=example,dc=com", "ldapmodify -Y GSSAPI SASL/GSSAPI 
authentication started SASL username: admin@EXAMPLE SASL SSF: 56 SASL data security layer installed. dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example changetype: add objectClass: top objectClass: inetorgperson cn: Stage sn: User adding new entry \"uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example\"", "ipa stageuser-show stageuser --all --raw dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example uid: stageuser sn: User cn: Stage has_password: FALSE has_keytab: FALSE nsaccountlock: TRUE objectClass: top objectClass: inetorgperson objectClass: organizationalPerson objectClass: person", "ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: admin@EXAMPLE SASL SSF: 56 SASL data security layer installed. dn: uid=user1,cn=users,cn=accounts,dc=example changetype: modrdn newrdn: uid=user1 deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=example modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=example\"", "ipa user-find --preserved=true --------------- 1 user matched --------------- User login: user1 First name: first_name Last name: last_name ---------------------------- Number of entries returned 1 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/provisioning
probe::vm.pagefault
probe::vm.pagefault Name probe::vm.pagefault - Records that a page fault occurred Synopsis vm.pagefault Values address the address of the faulting memory access; i.e. the address that caused the page fault write_access indicates whether this was a write or read access; 1 indicates a write, while 0 indicates a read name name of the probe point Context The process which triggered the fault
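To make the probe point concrete, the following is a minimal SystemTap sketch (not part of the tapset reference itself) that uses the write_access value described above; the script name, the 10-second run time, and the per-process summary format are arbitrary choices.

```
# vm_pagefault_summary.stp -- count read and write page faults per process,
# then print the busiest processes after 10 seconds.
# write_access is the documented probe value; execname() and pid() are
# standard SystemTap context functions.
global reads, writes

probe vm.pagefault {
  if (write_access)
    writes[execname(), pid()]++   # 1 indicates a write fault
  else
    reads[execname(), pid()]++    # 0 indicates a read fault
  # The faulting address is also available here as "address" if needed.
}

probe timer.s(10) {
  printf("%-20s %8s %8s %8s\n", "process", "pid", "reads", "writes")
  foreach ([proc, p] in writes- limit 10)
    printf("%-20s %8d %8d %8d\n", proc, p, reads[proc, p], writes[proc, p])
  exit()
}
```

Run it with stap vm_pagefault_summary.stp as root on a system with the kernel debuginfo packages installed.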
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-pagefault
Part I. Choosing a basic or advanced Google Cloud integration
Part I. Choosing a basic or advanced Google Cloud integration To create a Google Cloud integration, first decide whether you want to take a basic or advanced integration path. Basic For the basic option, go to Creating a Google Cloud integration: Basic . The basic path enables cost management to directly read your billing reports from GCP at a scope that you indicate. Advanced For the advanced option, go to Creating a Google Cloud integration: Advanced . The advanced path enables you to customize or filter your data before cost management reads it. You might also use the advanced path if you want to share billing data only with certain Red Hat products. The advanced path has more complex setup and configuration. Note You must select either basic or advanced; you cannot choose both.
null
https://docs.redhat.com/en/documentation/hybrid_committed_spend/1-latest/html/integrating_google_cloud_data_into_hybrid_committed_spend/choosing_a_basic_or_advanced_google_cloud_integration
Appendix B. ConnectionBuilder Methods
Appendix B. ConnectionBuilder Methods The following table outlines the key methods available to the ConnectionBuilder class used in V4 of the Java software development kit. Table B.1. ConnectionBuilder Methods Method Argument Type Description user String The name of the user with which to connect to the Manager. You must specify both the user name and domain, such as admin@internal . This method must be used together with the password method. password String The password of the user with which to connect to the Manager. compress Boolean Specifies whether responses from the server where the Manager is hosted should be compressed. This option is disabled by default, so this method is only required to enable this option. timeout Integer The timeout, in seconds, to wait for responses to requests. If a request takes longer than this value to respond, the request is cancelled, and an exception is thrown. This argument is optional. ssoUrl String The base URL of the server where the Manager is hosted. For example, https://server.example.com/ovirt-engine/sso/oauth/token?grant_type=password&scope=ovirt-app-api for password authentication. ssoRevokeUrl String The base URL of the SSO revoke service. This option only needs to be specified when you use an external authentication service. By default, this URL is automatically calculated from the value of the url option so that SSO token revocation is performed using the SSO service that is part of the engine. ssoTokenName String The token name in the JSON SSO response returned from the SSO server. By default, this value is access_token . insecure Boolean Enables or disables verification of the host name in the SSL certificate presented by the server where the Manager is hosted. By default, the identity of host names is verified, and the connection is rejected if the host name is not correct, so this method is only required to disable this option. trustStoreFile String Specifies the location of a file containing the CA certificate used to verify the certificate presented by the server where the Manager is hosted. This method must be used together with the trustStorePassword method. trustStorePassword String The password used to access the keystore file specified in the trustStoreFile method.
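As an illustration of how these methods are typically chained together, here is a minimal sketch only; the URL, credentials, trust store path, and timeout value are placeholders, and production code should add proper error handling.

```java
import org.ovirt.engine.sdk4.Connection;
import org.ovirt.engine.sdk4.ConnectionBuilder;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        // Build a connection to the Manager using the methods from Table B.1.
        // All values below are placeholders for this sketch.
        Connection connection = ConnectionBuilder.connection()
            .url("https://engine.example.com/ovirt-engine/api")
            .user("admin@internal")                      // user name and domain
            .password("mypassword")                      // password for that user
            .trustStoreFile("/home/user/truststore.jks") // keystore with the CA certificate
            .trustStorePassword("changeit")              // password for the keystore
            .compress(true)                              // enable compressed responses
            .timeout(300)                                // per-request timeout, in seconds
            .build();

        // ... call API services through the connection here ...

        connection.close();
    }
}
```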
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/java_sdk_guide/connectionbuilder_methods
Chapter 2. NVIDIA GPU architecture
Chapter 2. NVIDIA GPU architecture NVIDIA supports the use of graphics processing unit (GPU) resources on OpenShift Container Platform. OpenShift Container Platform is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. OpenShift Container Platform includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads. The NVIDIA GPU Operator uses the Operator framework within OpenShift Container Platform to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others. Note The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . 2.1. NVIDIA GPU prerequisites A working OpenShift cluster with at least one GPU worker node. Access to the OpenShift cluster as a cluster-admin to perform the required steps. OpenShift CLI ( oc ) is installed. The node feature discovery (NFD) Operator is installed and a nodefeaturediscovery instance is created. 2.2. NVIDIA GPU enablement The following diagram shows how the GPU architecture is enabled for OpenShift: Figure 2.1. NVIDIA GPU enablement Note MIG is supported on GPUs starting with the NVIDIA Ampere generation. For a list of GPUs that support MIG, see the NVIDIA MIG User Guide . 2.2.1. GPUs and bare metal You can deploy OpenShift Container Platform on an NVIDIA-certified bare metal server but with some limitations: Control plane nodes can be CPU nodes. Worker nodes must be GPU nodes, provided that AI/ML workloads are executed on these worker nodes. In addition, the worker nodes can host one or more GPUs, but they must be of the same type. For example, a node can have two NVIDIA A100 GPUs, but a node with one A100 GPU and one T4 GPU is not supported. The NVIDIA Device Plugin for Kubernetes does not support mixing different GPU models on the same node. When using OpenShift, note that one or three or more servers are required. Clusters with two servers are not supported. The single server deployment is called single node openShift (SNO) and using this configuration results in a non-high availability OpenShift environment. You can choose one of the following methods to access the containerized GPUs: GPU passthrough Multi-Instance GPU (MIG) Additional resources Red Hat OpenShift on Bare Metal Stack 2.2.2. GPUs and virtualization Many developers and enterprises are moving to containerized applications and serverless infrastructures, but there is still a lot of interest in developing and maintaining applications that run on virtual machines (VMs). Red Hat OpenShift Virtualization provides this capability, enabling enterprises to incorporate VMs into containerized workflows within clusters. You can choose one of the following methods to connect the worker nodes to the GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time-slicing, when GPU compute capacity is not saturated by workloads. Additional resources NVIDIA GPU Operator with OpenShift Virtualization 2.2.3. GPUs and vSphere You can deploy OpenShift Container Platform on an NVIDIA-certified VMware vSphere server that can host different GPU types. 
An NVIDIA GPU driver must be installed in the hypervisor if the VMs use vGPU instances. For VMware vSphere, this host driver is provided in the form of a VIB file. The maximum number of vGPUs that can be allocated to worker node VMs depends on the version of vSphere: vSphere 7.0: maximum of 4 vGPUs per VM vSphere 8.0: maximum of 8 vGPUs per VM Note vSphere 8.0 introduced support for multiple full or fractional heterogeneous profiles associated with a VM. You can choose one of the following methods to attach the worker nodes to the GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing, when not all of the GPU is needed Similar to bare metal deployments, one or three or more servers are required. Clusters with two servers are not supported. Additional resources OpenShift Container Platform on VMware vSphere with NVIDIA vGPUs 2.2.4. GPUs and Red Hat KVM You can use OpenShift Container Platform on an NVIDIA-certified kernel-based virtual machine (KVM) server. Similar to bare-metal deployments, one or three or more servers are required. Clusters with two servers are not supported. However, unlike bare-metal deployments, you can use different types of GPUs in the server. This is because you can assign these GPUs to different VMs that act as Kubernetes nodes. The only limitation is that a Kubernetes node must have the same set of GPU types at its own level. You can choose one of the following methods to access the containerized GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing when not all of the GPU is needed To enable the vGPU capability, a special driver must be installed at the host level. This driver is delivered as an RPM package. This host driver is not required at all for GPU passthrough allocation. 2.2.5. GPUs and CSPs You can deploy OpenShift Container Platform to one of the major cloud service providers (CSPs): Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Two modes of operation are available: a fully managed deployment and a self-managed deployment. In a fully managed deployment, everything is automated by Red Hat in collaboration with the CSP. You can request an OpenShift instance through the CSP web console, and the cluster is automatically created and fully managed by Red Hat. You do not have to worry about node failures or errors in the environment. Red Hat is fully responsible for maintaining the uptime of the cluster. The fully managed services are available on AWS and Azure. For AWS, the OpenShift service is called ROSA (Red Hat OpenShift Service on AWS). For Azure, the service is called Azure Red Hat OpenShift. In a self-managed deployment, you are responsible for instantiating and maintaining the OpenShift cluster. Red Hat provides the openshift-install utility to support the deployment of the OpenShift cluster in this case. The self-managed services are available globally to all CSPs. It is important that this compute instance is a GPU-accelerated compute instance and that the GPU type matches the list of supported GPUs from NVIDIA AI Enterprise. For example, T4, V100, and A100 are part of this list. You can choose one of the following methods to access the containerized GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time slicing when the entire GPU is not required. Additional resources Red Hat Openshift in the Cloud 2.2.6.
GPUs and Red Hat Device Edge Red Hat Device Edge provides access to MicroShift. MicroShift provides the simplicity of a single-node deployment with the functionality and services you need for resource-constrained (edge) computing. Red Hat Device Edge meets the needs of bare-metal, virtual, containerized, or Kubernetes workloads deployed in resource-constrained environments. You can enable NVIDIA GPUs on containers in a Red Hat Device Edge environment. You use GPU passthrough to access the containerized GPUs. Additional resources How to accelerate workloads with NVIDIA GPUs on Red Hat Device Edge 2.3. GPU sharing methods Red Hat and NVIDIA have developed GPU concurrency and sharing mechanisms to simplify GPU-accelerated computing on an enterprise-level OpenShift Container Platform cluster. Applications typically have different compute requirements that can leave GPUs underutilized. Providing the right amount of compute resources for each workload is critical to reduce deployment cost and maximize GPU utilization. Concurrency mechanisms for improving GPU utilization exist that range from programming model APIs to system software and hardware partitioning, including virtualization. The following list shows the GPU concurrency mechanisms: Compute Unified Device Architecture (CUDA) streams Time-slicing CUDA Multi-Process Service (MPS) Multi-instance GPU (MIG) Virtualization with vGPU Consider the following GPU sharing suggestions when using the GPU concurrency mechanisms for different OpenShift Container Platform scenarios: Bare metal vGPU is not available. Consider using MIG-enabled cards. VMs vGPU is the best choice. Older NVIDIA cards with no MIG on bare metal Consider using time-slicing. VMs with multiple GPUs and you want passthrough and vGPU Consider using separate VMs. Bare metal with OpenShift Virtualization and multiple GPUs Consider using pass-through for hosted VMs and time-slicing for containers. Additional resources Improving GPU Utilization 2.3.1. CUDA streams Compute Unified Device Architecture (CUDA) is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs. A stream is a sequence of operations that executes in issue-order on the GPU. CUDA commands are typically executed sequentially in a default stream and a task does not start until a preceding task has completed. Asynchronous processing of operations across different streams allows for parallel execution of tasks. A task issued in one stream runs before, during, or after another task is issued into another stream. This allows the GPU to run multiple tasks simultaneously in no prescribed order, leading to improved performance. Additional resources Asynchronous Concurrent Execution 2.3.2. Time-slicing GPU time-slicing interleaves workloads scheduled on overloaded GPUs when you are running multiple CUDA applications. You can enable time-slicing of GPUs on Kubernetes by defining a set of replicas for a GPU, each of which can be independently distributed to a pod to run workloads on. Unlike multi-instance GPU (MIG), there is no memory or fault isolation between replicas, but for some workloads this is better than not sharing at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU. You can apply a cluster-wide default configuration for time-slicing. You can also apply node-specific configurations. For example, you can apply a time-slicing configuration only to nodes with Tesla T4 GPUs and not modify nodes with other GPU models. 
You can combine these two approaches by applying a cluster-wide default configuration and then labeling nodes to give those nodes a node-specific configuration. 2.3.3. CUDA Multi-Process Service CUDA Multi-Process Service (MPS) allows multiple CUDA processes to share a single GPU. The processes run in parallel on the GPU, eliminating saturation of the GPU compute resources. MPS also enables concurrent execution, or overlapping, of kernel operations and memory copying from different processes to enhance utilization. Additional resources CUDA MPS 2.3.4. Multi-instance GPU Using Multi-instance GPU (MIG), you can split GPU compute units and memory into multiple MIG instances. Each of these instances represents a standalone GPU device from a system perspective and can be connected to any application, container, or virtual machine running on the node. The software that uses the GPU treats each of these MIG instances as an individual GPU. MIG is useful when you have an application that does not require the full power of an entire GPU. The MIG feature of the new NVIDIA Ampere architecture enables you to split your hardware resources into multiple GPU instances, each of which is available to the operating system as an independent CUDA-enabled GPU. NVIDIA GPU Operator version 1.7.0 and higher provides MIG support for the A100 and A30 Ampere cards. These GPU instances are designed to support up to seven independent CUDA applications so that they operate completely isolated with dedicated hardware resources. Additional resources NVIDIA Multi-Instance GPU User Guide 2.3.5. Virtualization with vGPU Virtual machines (VMs) can directly access a single physical GPU using NVIDIA vGPU. You can create virtual GPUs that can be shared by VMs across the enterprise and accessed by other devices. This capability combines the power of GPU performance with the management and security benefits provided by vGPU. Additional benefits provided by vGPU include proactive management and monitoring for your VM environment, workload balancing for mixed VDI and compute workloads, and resource sharing across multiple VMs. Additional resources Virtual GPUs 2.4. NVIDIA GPU features for OpenShift Container Platform NVIDIA Container Toolkit NVIDIA Container Toolkit enables you to create and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to use NVIDIA GPUs. NVIDIA AI Enterprise NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software optimized, certified, and supported with NVIDIA-Certified systems. NVIDIA AI Enterprise includes support for Red Hat OpenShift Container Platform. The following installation methods are supported: OpenShift Container Platform on bare metal or VMware vSphere with GPU Passthrough. OpenShift Container Platform on VMware vSphere with NVIDIA vGPU. GPU Feature Discovery NVIDIA GPU Feature Discovery for Kubernetes is a software component that enables you to automatically generate labels for the GPUs available on a node. GPU Feature Discovery uses node feature discovery (NFD) to perform this labeling. The Node Feature Discovery Operator (NFD) manages the discovery of hardware features and configurations in an OpenShift Container Platform cluster by labeling nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, OS version, and so on.
You can find the NFD Operator in the Operator Hub by searching for "Node Feature Discovery". NVIDIA GPU Operator with OpenShift Virtualization Up until this point, the GPU Operator only provisioned worker nodes to run GPU-accelerated containers. Now, the GPU Operator can also be used to provision worker nodes for running GPU-accelerated virtual machines (VMs). You can configure the GPU Operator to deploy different software components to worker nodes depending on which GPU workload is configured to run on those nodes. GPU Monitoring dashboard You can install a monitoring dashboard to display GPU usage information on the cluster Observe page in the OpenShift Container Platform web console. GPU utilization information includes the number of available GPUs, power consumption (in watts), temperature (in degrees Celsius), utilization (in percent), and other metrics for each GPU. Additional resources NVIDIA-Certified Systems NVIDIA AI Enterprise NVIDIA Container Toolkit Enabling the GPU Monitoring Dashboard MIG Support in OpenShift Container Platform Time-slicing NVIDIA GPUs in OpenShift Deploy GPU Operators in a disconnected or airgapped environment Node Feature Discovery Operator
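As a concrete illustration of the time-slicing configuration described in Section 2.3.2, the following sketch shows the general shape of a device plugin ConfigMap that the NVIDIA GPU Operator can reference from its ClusterPolicy; the ConfigMap name, namespace, data key, and replica count are placeholders, so check the Time-slicing NVIDIA GPUs in OpenShift resource above for the exact procedure.

```yaml
# Hypothetical example: advertise each physical Tesla T4 GPU as 4 schedulable replicas.
# The names (time-slicing-config, Tesla-T4) and the replica count are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
  namespace: nvidia-gpu-operator
data:
  Tesla-T4: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4
```

Applying such a ConfigMap as a cluster-wide default, and then labeling individual nodes to select a different key, is one way the cluster-wide and node-specific configurations mentioned in Section 2.3.2 can be combined.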
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hardware_accelerators/nvidia-gpu-architecture
Chapter 7. Advisories related to this release
Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bugfixes, and CVE fixes included in this release. RHSA-2023:0577
null
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/release_notes_for_eclipse_vert.x_4.3/advisories-related-to-current-release-vertx
8.158. openslp
8.158. openslp 8.158.1. RHBA-2014:1482 - openslp bug fix and enhancement update Updated openslp packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. OpenSLP is an open source implementation of the Service Location Protocol (SLP), which is an Internet Engineering Task Force (IETF) standards track protocol and provides a framework to allow networking applications to discover the existence, location, and configuration of networked services in enterprise networks. Note The openslp packages have been upgraded to upstream version 2.0.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1065558 ) Users of openslp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/openslp
Chapter 4. Installing a cluster
Chapter 4. Installing a cluster 4.1. Cleaning up installations In case of an earlier failed deployment, remove the artifacts from the failed attempt before trying to deploy OpenShift Container Platform again. Procedure Power off all bare-metal nodes before installing the OpenShift Container Platform cluster by using the following command: $ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove all old bootstrap resources if any remain from an earlier deployment attempt by using the following script: for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'}); do sudo virsh destroy $i; sudo virsh undefine $i; sudo virsh vol-delete $i --pool $i; sudo virsh vol-delete $i.ign --pool $i; sudo virsh pool-destroy $i; sudo virsh pool-undefine $i; done Delete the artifacts that the earlier installation generated by using the following command: $ cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \ .openshift_install.log .openshift_install_state.json Re-create the OpenShift Container Platform manifests by using the following command: $ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests 4.2. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: $ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 4.3. Following the progress of the installation During the deployment process, you can check the installation's overall status by running the tail command against the .openshift_install.log log file in the installation directory: $ tail -f /path/to/install-dir/.openshift_install.log 4.4. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify that the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server, reboot the OpenShift Container Platform node, and ensure that the network configuration works properly. 4.5. Additional resources Understanding update channels and releases
[ "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installing-a-cluster
4.345. wireshark
4.345. wireshark 4.345.1. RHSA-2012:0509 - Moderate: wireshark security update Updated wireshark packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Wireshark is a program for monitoring network traffic. Wireshark was previously known as Ethereal. Security Fixes CVE-2011-1590 , CVE-2011-4102 , CVE-2012-1595 Several flaws were found in Wireshark. If Wireshark read a malformed packet off a network or opened a malicious dump file, it could crash or, possibly, execute arbitrary code as the user running Wireshark. CVE-2011-1143 , CVE-2011-1957 , CVE-2011-1958 , CVE-2011-1959 , CVE-2011-2174 , CVE-2011-2175 , CVE-2011-2597 , CVE-2011-2698 , CVE-2012-0041 , CVE-2012-0042 , CVE-2012-0067 , CVE-2012-0066 Several denial of service flaws were found in Wireshark. Wireshark could crash or stop responding if it read a malformed packet off a network, or opened a malicious dump file. Users of Wireshark should upgrade to these updated packages, which contain backported patches to correct these issues. All running instances of Wireshark must be restarted for the update to take effect. 4.345.2. RHEA-2011:1772 - wireshark enhancement update An updated wireshark package that provides one enhancement is now available for Red Hat Enterprise Linux 6. Wireshark, previously known as Ethereal, is a network protocol analyzer. It is used to capture and browse the traffic running on a computer network. Enhancement BZ# 746839 Prior to this update, Wireshark did not show traffic information for the Network File System (NFS) version 4.1 protocol. With this update, the NFS packet dissector is enhanced so that Wireshark correctly displays traffic for this protocol. Note that NFS version 4.1 is introduced as a Technology Preview for Red Hat Enterprise Linux 6. Users of wireshark are advised to upgrade to this updated package, which adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/wireshark
Chapter 13. Custom resource API reference
Chapter 13. Custom resource API reference 13.1. Common configuration properties Common configuration properties apply to more than one resource. 13.1.1. replicas Use the replicas property to configure replicas. The type of replication depends on the resource. KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster. Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability. Note When running a Kafka component on OpenShift it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift will automatically reschedule the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times as the other nodes will be up and running. 13.1.2. bootstrapServers Use the bootstrapServers property to configure a list of bootstrap servers. The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by AMQ Streams. If on the same OpenShift cluster, each list must ideally contain the Kafka cluster bootstrap service which is named CLUSTER-NAME -kafka-bootstrap and a port number. If deployed by AMQ Streams but on different OpenShift clusters, the list content depends on the approach used for exposing the clusters (routes, ingress, nodeports or loadbalancers). When using Kafka with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster. 13.1.3. ssl Use the three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Example SSL configuration # ... spec: config: ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 1 ssl.enabled.protocols: "TLSv1.2" 2 ssl.protocol: "TLSv1.2" 3 ssl.endpoint.identification.algorithm: HTTPS 4 # ... 1 The cipher suite for TLS using a combination of ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encyption algorithm and SHA384 MAC algorithm. 2 The SSl protocol TLSv1.2 is enabled. 3 Specifies the TLSv1.2 protocol to generate the SSL context. Allowed values are TLSv1.1 and TLSv1.2 . 4 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. 13.1.4. trustedCertificates Having set tls to configure TLS encryption, use the trustedCertificates property to provide a list of secrets with key names under which the certificates are stored in X.509 format. You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-TLS-CERTIFICATE-FILE.crt Example TLS encryption configuration tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt If certificates are stored in the same secret, it can be listed multiple times. 
If you want to enable TLS, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array: Example of enabling TLS with the default Java certificates tls: trustedCertificates: [] For information on configuring TLS client authentication, see KafkaClientAuthenticationTls schema reference . 13.1.5. resources You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container. Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource. Use the resources.requests and resources.limits properties to configure resource requests and limits. For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources. AMQ Streams supports requests and limits for the following types of resources: cpu memory AMQ Streams uses the OpenShift syntax for specifying these resources. For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers . Resource requests Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available. Important If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled. A request may be configured for one or more supported resources. Example resource requests configuration # ... resources: requests: cpu: 12 memory: 64Gi # ... Resource limits Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests. A resource may be configured for one or more supported limits. Example resource limits configuration # ... resources: limits: cpu: 12 memory: 64Gi # ... Supported CPU formats CPU requests and limits are supported in the following formats: Number of CPU cores as integer ( 5 CPU cores) or decimal ( 2.5 CPU cores). Number of millicpus/millicores ( 100m ), where 1000 millicores is the same as 1 CPU core. Example CPU units # ... resources: requests: cpu: 500m limits: cpu: 2.5 # ... Note The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed. For more information on CPU specification, see the Meaning of CPU . Supported memory formats Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes. To specify memory in megabytes, use the M suffix. For example 1000M . To specify memory in gigabytes, use the G suffix. For example 1G . To specify memory in mebibytes, use the Mi suffix. For example 1000Mi . To specify memory in gibibytes, use the Gi suffix. For example 1Gi . Example resources using different memory units # ... resources: requests: memory: 512Mi limits: memory: 2Gi # ... For more details about memory specification and additional supported units, see Meaning of memory . 13.1.6. image Use the image property to configure the container image used by the component. Overriding container images is recommended only in special situations where you need to use a different container registry or a customized image. For example, if your network does not allow access to the container repository used by AMQ Streams, you can copy the AMQ Streams images or build them from the source.
However, if the configured image is not compatible with AMQ Streams images, it might not work properly. A copy of the container image might also be customized and used for debugging. You can specify which container image to use for a component using the image property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.topicOperator Kafka.spec.entityOperator.userOperator Kafka.spec.entityOperator.tlsSidecar KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables: STRIMZI_KAFKA_IMAGES STRIMZI_KAFKA_CONNECT_IMAGES STRIMZI_KAFKA_CONNECT_S2I_IMAGES STRIMZI_KAFKA_MIRROR_MAKER_IMAGES These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties: If neither image nor version are given in the custom resource then the version will default to the Cluster Operator's default Kafka version, and the image will be the one corresponding to this version in the environment variable. If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator's default Kafka version. If version is given but image is not, then the image that corresponds to the given version in the environment variable is used. If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version. The image and version for the different components can be configured in the following properties: For Kafka in spec.kafka.image and spec.kafka.version . For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version . Warning It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator's environment variables. Configuring the image property in other resources For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used. For Topic Operator: Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel8-operator:1.8.4 container image. For User Operator: Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel8-operator:1.8.4 container image. For Entity Operator TLS sidecar: Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 container image. 
For Kafka Exporter: Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 container image. For Kafka Bridge: Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-bridge-rhel8:1.8.4 container image. For Kafka broker initializer: Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel8-operator:1.8.4 container image. Example of container image configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ... 13.1.7. livenessProbe and readinessProbe healthchecks Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in AMQ Streams. Healthchecks are periodical tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it. For more details about the probes, see Configure Liveness and Readiness Probes . Both livenessProbe and readinessProbe support the following options: initialDelaySeconds timeoutSeconds periodSeconds successThreshold failureThreshold Example of liveness and readiness probe configuration # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... For more information about the livenessProbe and readinessProbe options, see Probe schema reference . 13.1.8. metricsConfig Use the metricsConfig property to enable and configure Prometheus metrics. The metricsConfig property contains a reference to a ConfigMap containing additional configuration for the Prometheus JMX exporter . AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. To enable Prometheus metrics export without further configuration, you can reference a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . When referencing an empty file, all metrics are exposed as long as they have not been renamed. Example ConfigMap with metrics configuration for Kafka kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: "USD3" topic: "USD4" partition: "USD5" # further configuration Example metrics configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # ... zookeeper: # ... When metrics are enabled, they are exposed on port 9404. When the metricsConfig (or deprecated metrics ) property is not defined in the resource, the Prometheus metrics are disabled. For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide. 13.1.9. 
jvmOptions The following AMQ Streams components run inside a Java Virtual Machine (JVM): Apache Kafka Apache ZooKeeper Apache Kafka Connect Apache Kafka MirrorMaker AMQ Streams Kafka Bridge To optimize their performance on different platforms and architectures, you configure the jvmOptions property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec You can specify the following options in your configuration: -Xms Minimum initial allocation heap size when the JVM starts. -Xmx Maximum heap size. -XX Advanced runtime options for the JVM. javaSystemProperties Additional system properties. gcLoggingEnabled Enables garbage collector logging . The full schema of jvmOptions is described in JvmOptions schema reference . Note The units accepted by JVM settings, such as -Xmx and -Xms , are the same units accepted by the JDK java binary in the corresponding image. Therefore, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is different from the units used for memory requests and limits , which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes -Xms and -Xmx options The default values used for -Xms and -Xmx depend on whether there is a memory request limit configured for the container. If there is a memory limit, the JVM's minimum and maximum memory is set to a value corresponding to the limit. If there is no memory limit, the JVM's minimum memory is set to 128M . The JVM's maximum memory is not defined to allow the memory to increase as needed. This is ideal for single node environments in test and development. Before setting -Xmx explicitly consider the following: The JVM's overall memory usage will be approximately 4 x the maximum heap, as configured by -Xmx . If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure from other Pods running on it. If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash immediately if -Xms is set to -Xmx , or at a later time if not. It is recommended to: Set the memory request and the memory limit to the same value Use a memory request that is at least 4.5 x the -Xmx Consider setting -Xms to the same value as -Xmx In this example, the JVM uses 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage is approximately 8GiB. Example -Xmx and -Xms configuration # ... jvmOptions: "-Xmx": "2g" "-Xms": "2g" # ... Setting the same value for initial ( -Xms ) and maximum ( -Xmx ) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. Important Containers performing lots of disk I/O, such as Kafka broker containers, require available memory for use as an operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM. -XX option -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka. 
Example -XX configuration jvmOptions: "-XX": "UseG1GC": true "MaxGCPauseMillis": 20 "InitiatingHeapOccupancyPercent": 35 "ExplicitGCInvokesConcurrent": true JVM options resulting from the -XX configuration Note When no -XX options are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used. javaSystemProperties javaSystemProperties are used to configure additional Java system properties, such as debugging utilities. Example javaSystemProperties configuration jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl 13.1.10. Garbage collector logging The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows: Example GC logging configuration # ... jvmOptions: gcLoggingEnabled: true # ... 13.2. Schema properties 13.2.1. Kafka schema reference Property Description spec The specification of the Kafka and ZooKeeper clusters, and Topic Operator. KafkaSpec status The status of the Kafka and ZooKeeper clusters, and Topic Operator. KafkaStatus 13.2.2. KafkaSpec schema reference Used in: Kafka Property Description kafka Configuration of the Kafka cluster. KafkaClusterSpec zookeeper Configuration of the ZooKeeper cluster. ZookeeperClusterSpec entityOperator Configuration of the Entity Operator. EntityOperatorSpec clusterCa Configuration of the cluster certificate authority. CertificateAuthority clientsCa Configuration of the clients certificate authority. CertificateAuthority cruiseControl Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. CruiseControlSpec kafkaExporter Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example lag of consumer group at topic/partition. KafkaExporterSpec maintenanceTimeWindows A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression. string array 13.2.3. KafkaClusterSpec schema reference Used in: KafkaSpec Full list of KafkaClusterSpec schema properties Configures a Kafka cluster. 13.2.3.1. listeners Use the listeners property to configure listeners to provide access to Kafka brokers. Example configuration of a plain (unencrypted) listener without authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false # ... zookeeper: # ... 13.2.3.2. config Use the config properties to configure Kafka broker options as keys. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Security (Encryption, Authentication, and Authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: listeners advertised. broker. listener. host.name port inter.broker.listener.name sasl. ssl. security. password. principal.builder.class log.dir zookeeper.connect zookeeper.set.acl authorizer. 
super.user When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection. Example Kafka broker configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" zookeeper.connection.timeout.ms: 6000 # ... 13.2.3.3. brokerRackInitImage When rack awareness is enabled, Kafka broker pods use init container to collect the labels from the OpenShift cluster nodes. The container image used for this container can be configured using the brokerRackInitImage property. When the brokerRackInitImage field is missing, the following images are used in order of priority: Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration. registry.redhat.io/amq7/amq-streams-rhel8-operator:1.8.4 container image. Example brokerRackInitImage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest # ... Note Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container registry used by AMQ Streams. In this case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly. 13.2.3.4. logging Kafka has its own configurable loggers: log4j.logger.org.I0Itec.zkclient.ZkClient log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannelUSD log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. 
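The referenced ConfigMap is an ordinary OpenShift ConfigMap that holds a complete log4j.properties configuration under the referenced key. A minimal sketch of such a ConfigMap, using the same name and key as the external logging example shown below (the appender and logger levels are illustrative):
kind: ConfigMap
apiVersion: v1
metadata:
  name: customConfigMap
data:
  kafka-log4j.properties: |
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    log4j.rootLogger=INFO, CONSOLE
    log4j.logger.kafka=INFO
    log4j.logger.org.apache.kafka=INFO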
A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: # ... logging: type: inline loggers: kafka.root.logger.level: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties # ... Any available loggers that are not configured have their level set to OFF . If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.3.5. KafkaClusterSpec schema properties Property Description version The kafka broker version. Defaults to 2.8.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the cluster. integer image The docker image for the pods. The default value depends on the configured Kafka.spec.kafka.version . string listeners Configures listeners of Kafka brokers. GenericKafkaListener array config Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., principal.builder.class, log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers (with the exception of: zookeeper.connection.timeout.ms, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols,cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms,cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms,cruise.control.metrics.topic.min.insync.replicas). map storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod]. EphemeralStorage , PersistentClaimStorage , JbodStorage authorization Authorization configuration for Kafka brokers. The type depends on the value of the authorization.type property within the given object, which must be one of [simple, opa, keycloak, custom]. KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom rack Configuration of the broker.rack broker config. Rack brokerRackInitImage The image of the init container used for initializing the broker.rack . string livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options for Kafka brokers. KafkaJmxOptions resources CPU and memory resources to reserve. 
For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics logging Logging configuration for Kafka. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template for Kafka cluster resources. The template allows users to specify how are the StatefulSet , Pods and Services generated. KafkaClusterTemplate 13.2.4. GenericKafkaListener schema reference Used in: KafkaClusterSpec Full list of GenericKafkaListener schema properties Configures listeners to connect to Kafka brokers within and outside OpenShift. You configure the listeners in the Kafka resource. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... 13.2.4.1. listeners You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example listener configuration listeners: - name: plain port: 9092 type: internal tls: false The name and port must be unique within the Kafka cluster. The name can be up to 25 characters long, comprising lower-case letters and numbers. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. By specifying a unique name and port for each listener, you can configure multiple listeners. 13.2.4.2. type The type is set as internal , or for external listeners, as route , loadbalancer , nodeport or ingress . internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes . A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. 
The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostnames used by the bootstrap and per-broker services using GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... Note External listeners using Ingress are currently only tested with the NGINX Ingress Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka Loadbalancer type Services . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using NodePort type Services . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. You can use preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #... Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. 13.2.4.3. port The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' Note Listeners cannot be configured to use the ports set aside for interbroker communication (9091) and metrics (9404). 13.2.4.4. tls The TLS property is required. By default, TLS encryption is not enabled. To enable it, set the tls property to true . TLS encryption is always used with route listeners. 13.2.4.5. authentication Authentication for the listener can be specified as: Mutual TLS ( tls ) SCRAM-SHA-512 ( scram-sha-512 ) Token-based OAuth 2.0 ( oauth ). 13.2.4.6. networkPolicyPeers Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. 
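Because the networkPolicyPeers entries use the standard NetworkPolicyPeer structure, podSelector , namespaceSelector , and ipBlock peers can be combined. For example, access to a listener could be limited to a CIDR range as in the following sketch (the address range is illustrative):
listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: false
    networkPolicyPeers:
      - ipBlock:
          cidr: 10.0.0.0/8
# ...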
The following example shows a networkPolicyPeers configuration for a plain and a tls listener. listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... In the example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers field is the same as the from field in NetworkPolicy resources. 13.2.4.7. GenericKafkaListener schema properties Property Description name Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within given a Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. string port Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. integer type Type of the listener. Currently the supported types are internal , route , loadbalancer , nodeport and ingress . internal type exposes Kafka internally only within the OpenShift cluster. route type uses OpenShift Routes to expose Kafka. loadbalancer type uses LoadBalancer type services to expose Kafka. nodeport type uses NodePort type services to expose Kafka. ingress type uses OpenShift Nginx Ingress to expose Kafka. string (one of [ingress, internal, route, loadbalancer, nodeport]) tls Enables TLS encryption on the listener. This is a required property. boolean authentication Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth]. KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth configuration Additional listener configuration. GenericKafkaListenerConfiguration networkPolicyPeers List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer . NetworkPolicyPeer array 13.2.5. KafkaListenerAuthenticationTls schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationTls type from KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth . It must have the value tls for the type KafkaListenerAuthenticationTls . Property Description type Must be tls . 
string 13.2.6. KafkaListenerAuthenticationScramSha512 schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationScramSha512 type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationOAuth . It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512 . Property Description type Must be scram-sha-512 . string 13.2.7. KafkaListenerAuthenticationOAuth schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationOAuth type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 . It must have the value oauth for the type KafkaListenerAuthenticationOAuth . Property Description accessTokenIsJwt Configure whether the access token is treated as JWT. This must be set to false if the authorization server returns opaque tokens. Defaults to true . boolean checkAccessTokenType Configure whether the access token type check is performed or not. This should be set to false if the authorization server does not include 'typ' claim in JWT token. Defaults to true . boolean checkAudience Enable or disable audience checking. Audience checks identify the recipients of tokens. If audience checking is enabled, the OAuth Client ID also has to be configured using the clientId property. The Kafka broker will reject tokens that do not have its clientId in their aud (audience) claim.Default value is false . boolean checkIssuer Enable or disable issuer checking. By default issuer is checked using the value configured by validIssuerUri . Default value is true . boolean clientAudience The audience to use when making requests to the authorization server's token endpoint. Used for inter-broker authentication and for configuring OAuth 2.0 over PLAIN using the clientId and secret method. string clientId OAuth Client ID which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. string clientScope The scope to use when making requests to the authorization server's token endpoint. Used for inter-broker authentication and for configuring OAuth 2.0 over PLAIN using the clientId and secret method. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. GenericSecretSource customClaimCheck JsonPath filter query to be applied to the JWT token or to the response of the introspection endpoint for additional token validation. Not set by default. string disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean enableECDSA The enableECDSA property has been deprecated. Enable or disable ECDSA support by installing BouncyCastle crypto provider. ECDSA support is always enabled. The BouncyCastle libraries are no longer packaged with AMQ Streams. Value is ignored. boolean enableOauthBearer Enable or disable OAuth authentication over SASL_OAUTHBEARER. Default value is true . boolean enablePlain Enable or disable OAuth authentication over SASL_PLAIN. There is no re-authentication support when this mechanism is used. Default value is false . boolean fallbackUserNameClaim The fallback username claim to be used for the user id if the claim specified by userNameClaim is not present. 
This is useful when client_credentials authentication only results in the client id being provided in another claim. It only takes effect if userNameClaim is set. string fallbackUserNamePrefix The prefix to use with the value of fallbackUserNameClaim to construct the user id. This only takes effect if fallbackUserNameClaim is set, and the value is present for the claim. Mapping usernames and client ids into the same user id space is useful in preventing name collisions. string introspectionEndpointUri URI of the token introspection endpoint which can be used to validate opaque non-JWT tokens. string jwksEndpointUri URI of the JWKS certificate endpoint, which can be used for local JWT validation. string jwksExpirySeconds Configures how often the JWKS certificates are considered valid. The expiry interval has to be at least 60 seconds longer than the refresh interval specified in jwksRefreshSeconds . Defaults to 360 seconds. integer jwksMinRefreshPauseSeconds The minimum pause between two consecutive refreshes. When an unknown signing key is encountered the refresh is scheduled immediately, but will always wait for this minimum pause. Defaults to 1 second. integer jwksRefreshSeconds Configures how often the JWKS certificates are refreshed. The refresh interval has to be at least 60 seconds shorter than the expiry interval specified in jwksExpirySeconds . Defaults to 300 seconds. integer maxSecondsWithoutReauthentication Maximum number of seconds the authenticated session remains valid without re-authentication. This enables the Apache Kafka re-authentication feature, and causes sessions to expire when the access token expires. If the access token expires before max time or if max time is reached, the client has to re-authenticate, otherwise the server will drop the connection. Not set by default - the authenticated session does not expire when the access token expires. This option only applies to the SASL_OAUTHBEARER authentication mechanism (when enableOauthBearer is true ). integer tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array tokenEndpointUri URI of the Token Endpoint to use with SASL_PLAIN mechanism when the client authenticates with clientId and a secret . If set, the client can authenticate over SASL_PLAIN by either setting username to clientId , and setting password to client secret , or by setting username to account username, and password to access token prefixed with $accessToken: . If this option is not set, the password is always interpreted as an access token (without a prefix), and username as the account username (a so-called 'no-client-credentials' mode). string type Must be oauth . string userInfoEndpointUri URI of the User Info Endpoint to use as a fallback for obtaining the user id when the Introspection Endpoint does not return information that can be used for the user id. string userNameClaim Name of the claim from the JWT authentication token, Introspection Endpoint response or User Info Endpoint response which will be used to extract the user id. Defaults to sub . string validIssuerUri URI of the token issuer used for authentication. string validTokenType Valid value for the token_type attribute returned by the Introspection Endpoint. No default value, and not checked by default. string 13.2.8. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationOAuth Property Description key The key under which the secret value is stored in the OpenShift Secret.
string secretName The name of the OpenShift Secret containing the secret value. string 13.2.9. CertSecretSource schema reference Used in: KafkaAuthorizationKeycloak , KafkaBridgeTls , KafkaClientAuthenticationOAuth , KafkaConnectTls , KafkaListenerAuthenticationOAuth , KafkaMirrorMaker2Tls , KafkaMirrorMakerTls Property Description certificate The name of the file certificate in the Secret. string secretName The name of the Secret containing the certificate. string 13.2.10. GenericKafkaListenerConfiguration schema reference Used in: GenericKafkaListener Full list of GenericKafkaListenerConfiguration schema properties Configuration for Kafka listeners. 13.2.10.1. brokerCertChainAndKey The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates. Example configuration for a loadbalancer external listener with TLS encryption enabled listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... 13.2.10.2. externalTrafficPolicy The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift, you can choose Local or Cluster . Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster . 13.2.10.3. loadBalancerSourceRanges The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift, use source ranges, in addition to labels and annotations, to customize how a service is created. Example source ranges configured for a loadbalancer listener listeners: #... - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... # ... 13.2.10.4. class The class property is only used with ingress listeners. You can configure the Ingress class using the class property. Example of an external listener of type ingress using Ingress class nginx-internal listeners: #... - name: external port: 9094 type: ingress tls: true configuration: class: nginx-internal # ... # ... 13.2.10.5. preferredNodePortAddressType The preferredNodePortAddressType property is only used with nodeport listeners. Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, AMQ Streams proceeds through the types in the standard order of priority: ExternalDNS ExternalIP Hostname InternalDNS InternalIP Example of an external listener configured with a preferred node port address type listeners: #... - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS # ... # ... 13.2.10.6. useServiceDnsDomain The useServiceDnsDomain property is only used with internal listeners. It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local ) are used.
With useServiceDnsDomain set as false , the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc . With useServiceDnsDomain set as true , the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local . Default is false . Example of an internal listener configured to use the Service DNS domain listeners: #... - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true # ... # ... If your OpenShift cluster uses a different service suffix than .cluster.local , you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. See Section 5.1.1, "Cluster Operator configuration" for more details. 13.2.10.7. GenericKafkaListenerConfiguration schema properties Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair which will be used for this listener. The certificate can optionally contain the whole chain. This field can be used only with listeners with enabled TLS encryption. CertAndKeySecretSource externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default.This field can be used only with loadbalancer or nodeport type listener. string (one of [Local, Cluster]) loadBalancerSourceRanges A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/ . This field can be used only with loadbalancer type listener. string array bootstrap Bootstrap configuration. GenericKafkaListenerConfigurationBootstrap brokers Per-broker configurations. GenericKafkaListenerConfigurationBroker array ipFamilyPolicy Specifies the IP Family Policy used by the service. Available options are SingleStack , PreferDualStack and RequireDualStack . SingleStack is for a single IP family. PreferDualStack is for two IP families on dual-stack configured clusters or a single IP family on single-stack clusters. RequireDualStack fails unless there are two IP families on dual-stack configured clusters. If unspecified, OpenShift will choose the default value based on the service type. Available on OpenShift 1.20 and newer. string (one of [RequireDualStack, SingleStack, PreferDualStack]) ipFamilies Specifies the IP Families used by the service. Available options are IPv4 and IPv6. If unspecified, OpenShift will choose the default value based on the `ipFamilyPolicy setting. Available on OpenShift 1.20 and newer. string (one or more of [IPv6, IPv4]) array class Configures the Ingress class that defines which Ingress controller will be used. This field can be used only with ingress type listener. If not specified, the default Ingress controller will be used. 
string finalizers A list of finalizers which will be configured for the LoadBalancer type Services created for this listener. If supported by the platform, the finalizer service.kubernetes.io/load-balancer-cleanup to make sure that the external load balancer is deleted together with the service.For more information, see https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#garbage-collecting-load-balancers . This field can be used only with loadbalancer type listeners. string array maxConnectionCreationRate The maximum connection creation rate we allow in this listener at any time. New connections will be throttled if the limit is reached.Supported only on Kafka 2.7.0 and newer. integer maxConnections The maximum number of connections we allow for this listener in the broker at any time. New connections are blocked if the limit is reached. integer preferredNodePortAddressType Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): * ExternalDNS * ExternalIP * InternalDNS * InternalIP * Hostname This field is used to select the preferred address type, which is checked first. If no address is found for this address type, the other types are checked in the default order. This field can only be used with nodeport type listener. string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) useServiceDnsDomain Configures whether the OpenShift service DNS domain should be used or not. If set to true , the generated addresses will contain the service DNS domain suffix (by default .cluster.local , can be configured using environment variable KUBERNETES_SERVICE_DNS_DOMAIN ). Defaults to false .This field can be used only with internal type listener. boolean 13.2.11. CertAndKeySecretSource schema reference Used in: GenericKafkaListenerConfiguration , KafkaClientAuthenticationTls Property Description certificate The name of the file certificate in the Secret. string key The name of the private key in the Secret. string secretName The name of the Secret containing the certificate. string 13.2.12. GenericKafkaListenerConfigurationBootstrap schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBootstrap schema properties Broker service equivalents of nodePort , host , loadBalancerIP and annotations properties are configured in the GenericKafkaListenerConfigurationBroker schema . 13.2.12.1. alternativeNames You can specify alternative names for the bootstrap service. The names are added to the broker certificates and can be used for TLS hostname verification. The alternativeNames property is applicable to all types of listeners. Example of an external route listener configured with an additional bootstrap address listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2 # ... 13.2.12.2. host The host property is used with route and ingress listeners to specify the hostnames used by the bootstrap and per-broker services. A host property value is mandatory for ingress listener configuration, as the Ingress controller does not assign any hostnames automatically. Make sure that the hostnames resolve to the Ingress endpoints. 
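For example, before applying the configuration you can check that a configured hostname resolves to the address of your Ingress controller (the hostname is taken from the example below):
nslookup bootstrap.myingress.com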
AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints. Example of host configuration for an ingress listener listeners: #... - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # ... By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts. AMQ Streams does not perform any validation that the requested hosts are available. You must ensure that they are free and can be used. Example of host configuration for a route listener # ... listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com # ... 13.2.12.3. nodePort By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. You can override the assigned node ports for nodeport listeners by specifying the requested port numbers. AMQ Streams does not perform any validation on the requested ports. You must ensure that they are free and available for use. Example of an external listener configured with overrides for node ports # ... listeners: #... - name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002 # ... 13.2.12.4. loadBalancerIP Use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature. Example of an external listener of type loadbalancer with specific loadbalancer IP address requests # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3 # ... 13.2.12.5. annotations Use the annotations property to add annotations to OpenShift resources related to the listeners. You can use these annotations, for example, to instrument DNS tooling such as External DNS , which automatically assigns DNS names to the loadbalancer services. Example of an external listener of type loadbalancer using annotations # ... listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: "60" # ... 13.2.12.6. 
GenericKafkaListenerConfigurationBootstrap schema properties Property Description alternativeNames Additional alternative names for the bootstrap service. The alternative names will be added to the list of subject alternative names of the TLS certificates. string array host The bootstrap host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the bootstrap service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress , Route , or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map labels Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map 13.2.13. GenericKafkaListenerConfigurationBroker schema reference Used in: GenericKafkaListenerConfiguration Full list of GenericKafkaListenerConfigurationBroker schema properties You can see example configuration for the nodePort , host , loadBalancerIP and annotations properties in the GenericKafkaListenerConfigurationBootstrap schema , which configures bootstrap service overrides. Advertised addresses for brokers By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed. You can specify a broker ID and customize the advertised hostname and port in the configuration property of the listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of listeners. Example of an external route listener configured with overrides for advertised addresses listeners: #... - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342 # ... 13.2.13.1. GenericKafkaListenerConfigurationBroker schema properties Property Description broker ID of the kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas. integer advertisedHost The host name which will be used in the brokers' advertised.brokers . string advertisedPort The port number which will be used in the brokers' advertised.brokers . integer host The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. 
This field can be used only with route (optional) or ingress (required) type listeners. string nodePort Node port for the per-broker service. This field can be used only with nodeport type listener. integer loadBalancerIP The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.This field can be used only with loadbalancer type listener. string annotations Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer , nodeport , or ingress type listeners. map labels Labels that will be added to the Ingress , Route , or Service resource. This field can be used only with loadbalancer , nodeport , route , or ingress type listeners. map 13.2.14. EphemeralStorage schema reference Used in: JbodStorage , KafkaClusterSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the EphemeralStorage type from PersistentClaimStorage . It must have the value ephemeral for the type EphemeralStorage . Property Description id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. integer sizeLimit When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi). string type Must be ephemeral . string 13.2.15. PersistentClaimStorage schema reference Used in: JbodStorage , KafkaClusterSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the PersistentClaimStorage type from EphemeralStorage . It must have the value persistent-claim for the type PersistentClaimStorage . Property Description type Must be persistent-claim . string size When type=persistent-claim, defines the size of the persistent volume claim (i.e 1Gi). Mandatory when type=persistent-claim. string selector Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. map deleteClaim Specifies if the persistent volume claim has to be deleted when the cluster is un-deployed. boolean class The storage class to use for dynamic volume allocation. string id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. integer overrides Overrides for individual brokers. The overrides field allows to specify a different configuration for different brokers. PersistentClaimStorageOverride array 13.2.16. PersistentClaimStorageOverride schema reference Used in: PersistentClaimStorage Property Description class The storage class to use for dynamic volume allocation for this broker. string broker Id of the kafka broker (broker identifier). integer 13.2.17. JbodStorage schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage , PersistentClaimStorage . It must have the value jbod for the type JbodStorage . Property Description type Must be jbod . string volumes List of volumes as Storage objects representing the JBOD disks array. EphemeralStorage , PersistentClaimStorage array 13.2.18. 
KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationSimple schema properties Simple authorization in AMQ Streams uses the AclAuthorizer plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to use simple authorization. Set the type property in the authorization section to the value simple , and configure a list of super users. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . 13.2.18.1. superUsers A list of user principals treated as super users, so that they are always allowed without querying ACL rules. For more information see Kafka authorization . An example of simple authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration . 13.2.18.2. KafkaAuthorizationSimple schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value simple for the type KafkaAuthorizationSimple . Property Description type Must be simple . string superUsers List of super users. Should contain list of user principals which should get unlimited access rights. string array 13.2.19. KafkaAuthorizationOpa schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationOpa schema properties To use Open Policy Agent authorization, set the type property in the authorization section to the value opa , and configure OPA properties as required. 13.2.19.1. url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required. 13.2.19.2. allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied. 13.2.19.3. initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000 . 13.2.19.4. maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . 13.2.19.5. expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 milliseconds (1 hour). 13.2.19.6. superUsers A list of user principals treated as super users, so that they are always allowed without querying the open Policy Agent policy. For more information see Kafka authorization . An example of Open Policy Agent authorizer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... 
authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward # ... 13.2.19.7. KafkaAuthorizationOpa schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationOpa type from KafkaAuthorizationSimple , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value opa for the type KafkaAuthorizationOpa . Property Description type Must be opa . string url The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required. string allowOnError Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable). Defaults to false - all actions will be denied. boolean initialCacheCapacity Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request Defaults to 5000 . integer maximumCacheSize Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000 . integer expireAfterMs The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 . integer superUsers List of super users, which is specifically a list of user principals that have unlimited access rights. string array 13.2.20. KafkaAuthorizationKeycloak schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the KafkaAuthorizationKeycloak type from KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationCustom . It must have the value keycloak for the type KafkaAuthorizationKeycloak . Property Description type Must be keycloak . string clientId OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. string tokenEndpointUri Authorization server token endpoint URI. string tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean delegateToKafkaAcls Whether authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat Single Sign-On Authorization Services policies. Default value is false . boolean grantsRefreshPeriodSeconds The time between two consecutive grants refresh runs in seconds. The default value is 60. integer grantsRefreshPoolSize The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5. integer superUsers List of super users. Should contain list of user principals which should get unlimited access rights. string array 13.2.21. KafkaAuthorizationCustom schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationCustom schema properties To use custom authorization in AMQ Streams, you can configure your own Authorizer plugin to define Access Control Lists (ACLs). 
ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to use custom authorization. Set the type property in the authorization section to the value custom , and set the following properties. Important The custom authorizer must implement the org.apache.kafka.server.authorizer.Authorizer interface, and support configuration of super.users using the super.users configuration property. 13.2.21.1. authorizerClass (Required) Java class that implements the org.apache.kafka.server.authorizer.Authorizer interface to support custom ACLs. 13.2.21.2. superUsers A list of user principals treated as super users, so that they are always allowed without querying ACL rules. For more information, see Kafka authorization . You can add configuration for initializing the custom authorizer using Kafka.spec.kafka.config . An example of custom authorization configuration under Kafka.spec apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: custom authorizerClass: io.mycompany.CustomAuthorizer superUsers: - CN=client_1 - user_2 - CN=client_3 # ... config: authorization.custom.property1=value1 authorization.custom.property2=value2 # ... In addition to the Kafka custom resource configuration, the JAR file containing the custom authorizer class along with its dependencies must be available on the classpath of the Kafka broker. The AMQ Streams Maven build process provides a mechanism to add custom third-party libraries to the generated Kafka broker container image by adding them as dependencies in the pom.xml file under the docker-images/kafka/kafka-thirdparty-libs directory. The directory contains different folders for different Kafka versions. Choose the appropriate folder. Before modifying the pom.xml file, the third-party library must be available in a Maven repository, and that Maven repository must be accessible to the AMQ Streams build process. Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration . 13.2.21.3. KafkaAuthorizationCustom schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationCustom type from KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak . It must have the value custom for the type KafkaAuthorizationCustom . Property Description type Must be custom . string authorizerClass Authorization implementation class, which must be available on the classpath. string superUsers List of super users, which are user principals with unlimited access rights. string array 13.2.22. Rack schema reference Used in: KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec Full list of Rack schema properties The rack option configures rack awareness. A rack can represent an availability zone, data center, or an actual rack in your data center. The rack is configured through a topologyKey . topologyKey identifies a label on OpenShift nodes that contains the name of the topology in its value. An example of such a label is topology.kubernetes.io/zone (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions), which contains the name of the availability zone in which the OpenShift node runs.
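To check which topology labels are set on a particular node, you can list the node labels; for example (the node name is illustrative):
oc get node worker-node-1 --show-labels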
You can configure your Kafka cluster to be aware of the rack in which it runs, and enable additional features such as spreading partition replicas across different racks or consuming messages from the closest replicas. For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints . Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed. 13.2.22.1. Spreading partition replicas across racks When rack awareness is configured, AMQ Streams will set the broker.rack configuration for each Kafka broker. The broker.rack configuration assigns a rack ID to each broker. When broker.rack is configured, Kafka brokers will spread partition replicas across as many different racks as possible. When replicas are spread across multiple racks, the probability that multiple replicas will fail at the same time is lower than if they were in the same rack. Spreading replicas improves resiliency, and is important for availability and reliability. To enable rack awareness in Kafka, add the rack option to the .spec.kafka section of the Kafka custom resource as shown in the example below. Example rack configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone # ... Note The rack in which brokers are running can change in some cases when the pods are deleted or restarted. As a result, the replicas running in different racks might then share the same rack. Use Cruise Control and the KafkaRebalance resource with the RackAwareGoal to make sure that replicas remain distributed across different racks. When rack awareness is enabled in the Kafka custom resource, AMQ Streams will automatically add the OpenShift preferredDuringSchedulingIgnoredDuringExecution affinity rule to distribute the Kafka brokers across the different racks. However, the preferred rule does not guarantee that the brokers will be spread. Depending on your exact OpenShift and Kafka configurations, you should add additional affinity rules or configure topologySpreadConstraints for both ZooKeeper and Kafka to make sure the nodes are properly distributed across as many racks as possible. For more information, see Section 2.7, "Configuring pod scheduling" . 13.2.22.2. Consuming messages from the closest replicas Rack awareness can also be used in consumers to fetch data from the closest replica. This is useful for reducing the load on your network when a Kafka cluster spans multiple data centers and can also reduce costs when running Kafka in public clouds. However, it can lead to increased latency. In order to be able to consume from the closest replica, rack awareness has to be configured in the Kafka cluster, and the RackAwareReplicaSelector has to be enabled. The replica selector plugin provides the logic that enables clients to consume from the nearest replica. The default implementation uses LeaderSelector to always select the leader replica for the client. Specify RackAwareReplicaSelector for the replica.selector.class to switch from the default implementation. Example rack configuration with the rack-aware replica selector enabled apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone config: # ... replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector # ...
In addition to the Kafka broker configuration, you also need to specify the client.rack option in your consumers. The client.rack option should specify the rack ID in which the consumer is running. RackAwareReplicaSelector associates matching broker.rack and client.rack IDs, to find the nearest replica and consume from it. If there are multiple replicas in the same rack, RackAwareReplicaSelector always selects the most up-to-date replica. If the rack ID is not specified, or if it cannot find a replica with the same rack ID, it will fall back to the leader replica. Figure 13.1. Example showing client consuming from replicas in the same availability zone Consuming messages from the closest replicas can also be used in Kafka Connect for sink connectors that consume messages. When deploying Kafka Connect using AMQ Streams, you can use the rack section in the KafkaConnect or KafkaConnectS2I custom resources to automatically configure the client.rack option. Example rack configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # ... spec: # ... rack: topologyKey: topology.kubernetes.io/zone # ... Enabling rack awareness in the KafkaConnect or KafkaConnectS2I custom resource will not set any affinity rules, but you can also configure affinity or topologySpreadConstraints . For more information see Section 2.7, "Configuring pod scheduling" . 13.2.22.3. Rack schema properties Property Description topologyKey A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set the broker's broker.rack config and client.rack in Kafka Connect. string 13.2.23. Probe schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaExporterSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TlsSidecar , ZookeeperClusterSpec Property Description failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. integer initialDelaySeconds The initial delay before the health is first checked. Defaults to 15 seconds. Minimum value is 0. integer periodSeconds How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. integer successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. integer timeoutSeconds The timeout for each attempted health check. Defaults to 5 seconds. Minimum value is 1. integer 13.2.24. JvmOptions schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec Property Description -XX A map of -XX options to the JVM. map -Xms -Xms option to the JVM. string -Xmx -Xmx option to the JVM. string gcLoggingEnabled Specifies whether the Garbage Collection logging is enabled. The default is false. boolean javaSystemProperties A map of additional system properties which will be passed using the -D option to the JVM. SystemProperty array 13.2.25. SystemProperty schema reference Used in: JvmOptions Property Description name The system property name. string value The system property value. string 13.2.26.
KafkaJmxOptions schema reference Used in: KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of KafkaJmxOptions schema properties Configures JMX connection options. JMX metrics are obtained from Kafka brokers, Kafka Connect, and MirrorMaker 2.0 by opening a JMX port on 9999. Use the jmxOptions property to configure a password-protected or an unprotected JMX port. Using password protection prevents unauthorized pods from accessing the port. You can then obtain metrics about the component. For example, for each Kafka broker you can obtain bytes-per-second usage data from clients, or the request rate of the network of the broker. To enable security for the JMX port, set the type parameter in the authentication field to password . Example password-protected JMX configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: authentication: type: "password" # ... zookeeper: # ... You can then deploy a pod into a cluster and obtain JMX metrics using the headless service by specifying which broker you want to address. For example, to get JMX metrics from broker 0 you specify: " CLUSTER-NAME -kafka-0. CLUSTER-NAME -kafka-brokers" CLUSTER-NAME -kafka-0 is name of the broker pod, and CLUSTER-NAME -kafka-brokers is the name of the headless service to return the IPs of the broker pods. If the JMX port is secured, you can get the username and password by referencing them from the JMX Secret in the deployment of your pod. For an unprotected JMX port, use an empty object {} to open the JMX port on the headless service. You deploy a pod and obtain metrics in the same way as for the protected port, but in this case any pod can read from the JMX port. Example open port JMX configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: {} # ... zookeeper: # ... Additional resources For more information on the Kafka component metrics exposed using JMX, see the Apache Kafka documentation . 13.2.26.1. KafkaJmxOptions schema properties Property Description authentication Authentication configuration for connecting to the JMX port. The type depends on the value of the authentication.type property within the given object, which must be one of [password]. KafkaJmxAuthenticationPassword 13.2.27. KafkaJmxAuthenticationPassword schema reference Used in: KafkaJmxOptions The type property is a discriminator that distinguishes use of the KafkaJmxAuthenticationPassword type from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword . Property Description type Must be password . string 13.2.28. JmxPrometheusExporterMetrics schema reference Used in: CruiseControlSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the JmxPrometheusExporterMetrics type from other subtypes which may be added in the future. It must have the value jmxPrometheusExporter for the type JmxPrometheusExporterMetrics . Property Description type Must be jmxPrometheusExporter . string valueFrom ConfigMap entry where the Prometheus JMX Exporter configuration is stored. For details of the structure of this configuration, see the JMX Exporter documentation . ExternalConfigurationReference 13.2.29. 
ExternalConfigurationReference schema reference Used in: ExternalLogging , JmxPrometheusExporterMetrics Property Description configMapKeyRef Reference to the key in the ConfigMap containing the configuration. For more information, see the external documentation for core/v1 configmapkeyselector . ConfigMapKeySelector 13.2.30. InlineLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging . It must have the value inline for the type InlineLogging . Property Description type Must be inline . string loggers A Map from logger name to logger level. map 13.2.31. ExternalLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the ExternalLogging type from InlineLogging . It must have the value external for the type ExternalLogging . Property Description type Must be external . string valueFrom ConfigMap entry where the logging configuration is stored. ExternalConfigurationReference 13.2.32. KafkaClusterTemplate schema reference Used in: KafkaClusterSpec Property Description statefulset Template for Kafka StatefulSet . StatefulSetTemplate pod Template for Kafka Pods . PodTemplate bootstrapService Template for Kafka bootstrap Service . InternalServiceTemplate brokersService Template for Kafka broker Service . InternalServiceTemplate externalBootstrapService Template for Kafka external bootstrap Service . ExternalServiceTemplate perPodService Template for Kafka per-pod Services used for access from outside of OpenShift. ExternalServiceTemplate externalBootstrapRoute Template for Kafka external bootstrap Route . ResourceTemplate perPodRoute Template for Kafka per-pod Routes used for access from outside of OpenShift. ResourceTemplate externalBootstrapIngress Template for Kafka external bootstrap Ingress . ResourceTemplate perPodIngress Template for Kafka per-pod Ingress used for access from outside of OpenShift. ResourceTemplate persistentVolumeClaim Template for all Kafka PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for Kafka PodDisruptionBudget . PodDisruptionBudgetTemplate kafkaContainer Template for the Kafka broker container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate clusterCaCert Template for Secret with Kafka Cluster certificate public key. ResourceTemplate serviceAccount Template for the Kafka service account. ResourceTemplate clusterRoleBinding Template for the Kafka ClusterRoleBinding. ResourceTemplate 13.2.33. StatefulSetTemplate schema reference Used in: KafkaClusterTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate podManagementPolicy PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady . Defaults to Parallel . string (one of [OrderedReady, Parallel]) 13.2.34. 
MetadataTemplate schema reference Used in: DeploymentTemplate , ExternalServiceTemplate , InternalServiceTemplate , PodDisruptionBudgetTemplate , PodTemplate , ResourceTemplate , StatefulSetTemplate Full list of MetadataTemplate schema properties Labels and Annotations are used to identify and organize resources, and are configured in the metadata property. For example: # ... template: statefulset: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io . Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured. 13.2.34.1. MetadataTemplate schema properties Property Description labels Labels added to the resource template. Can be applied to different resources such as StatefulSets , Deployments , Pods , and Services . map annotations Annotations added to the resource template. Can be applied to different resources such as StatefulSets , Deployments , Pods , and Services . map 13.2.35. PodTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodTemplate schema properties Configures the template for Kafka pods. Example PodTemplate configuration # ... template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 # ... 13.2.35.1. hostAliases Use the hostAliases property to specify a list of hosts and IP addresses, which are injected into the /etc/hosts file of the pod. This configuration is especially useful for Kafka Connect or MirrorMaker when a connection outside of the cluster is also requested by users. Example hostAliases configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect #... spec: # ... template: pod: hostAliases: - ip: "192.168.1.86" hostnames: - "my-host-1" - "my-host-2" #... 13.2.35.2. PodTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate imagePullSecrets List of references to secrets in the same namespace to use for pulling any of the images used by this Pod. When the STRIMZI_IMAGE_PULL_SECRETS environment variable in the Cluster Operator and the imagePullSecrets option are specified, only the imagePullSecrets variable is used and the STRIMZI_IMAGE_PULL_SECRETS variable is ignored. For more information, see the external documentation for core/v1 localobjectreference . LocalObjectReference array securityContext Configures pod-level security attributes and common container settings. For more information, see the external documentation for core/v1 podsecuritycontext . PodSecurityContext terminationGracePeriodSeconds The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time the processes are forcibly halted with a kill signal. Set this value to longer than the expected cleanup time for your process. Value must be a non-negative integer. A zero value indicates delete immediately. You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. Defaults to 30 seconds.
integer affinity The pod's affinity rules. For more information, see the external documentation for core/v1 affinity . Affinity tolerations The pod's tolerations. For more information, see the external documentation for core/v1 toleration . Toleration array priorityClassName The name of the priority class used to assign priority to the pods. For more information about priority classes, see Pod Priority and Preemption . string schedulerName The name of the scheduler used to dispatch this Pod . If not specified, the default scheduler will be used. string hostAliases The pod's HostAliases. HostAliases is an optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. For more information, see the external documentation for core/v1 HostAlias . HostAlias array enableServiceLinks Indicates whether information about services should be injected into Pod's environment variables. boolean topologySpreadConstraints The pod's topology spread constraints. For more information, see the external documentation for core/v1 topologyspreadconstraint . TopologySpreadConstraint array 13.2.36. InternalServiceTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate ipFamilyPolicy Specifies the IP Family Policy used by the service. Available options are SingleStack , PreferDualStack and RequireDualStack . SingleStack is for a single IP family. PreferDualStack is for two IP families on dual-stack configured clusters or a single IP family on single-stack clusters. RequireDualStack fails unless there are two IP families on dual-stack configured clusters. If unspecified, OpenShift will choose the default value based on the service type. Available on OpenShift 1.20 and newer. string (one of [RequireDualStack, SingleStack, PreferDualStack]) ipFamilies Specifies the IP Families used by the service. Available options are IPv4 and IPv6. If unspecified, OpenShift will choose the default value based on the `ipFamilyPolicy setting. Available on OpenShift 1.20 and newer. string (one or more of [IPv6, IPv4]) array 13.2.37. ExternalServiceTemplate schema reference Used in: KafkaClusterTemplate Full list of ExternalServiceTemplate schema properties When exposing Kafka outside of OpenShift using loadbalancers or node ports, you can use properties, in addition to labels and annotations, to customize how a Service is created. An example showing customized external services # ... template: externalBootstrapService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 perPodService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... 13.2.37.1. ExternalServiceTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate 13.2.38. ResourceTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , KafkaUserTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate 13.2.39. 
PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties AMQ Streams creates a PodDisruptionBudget for every new StatefulSet or Deployment . By default, pod disruption budgets only allow a single pod to be unavailable at a given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property in the PodDisruptionBudget.spec resource. An example of a PodDisruptionBudget template # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 13.2.39.1. PodDisruptionBudgetTemplate schema properties Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate maxUnavailable Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. integer 13.2.40. ContainerTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of ContainerTemplate schema properties You can set custom security context and environment variables for a container. The environment variables are defined under the env property as a list of objects with name and value fields. The following example shows two custom environment variables and a custom security context set for the Kafka broker containers: # ... template: kafkaContainer: env: - name: EXAMPLE_ENV_1 value: example.env.one - name: EXAMPLE_ENV_2 value: example.env.two securityContext: runAsUser: 2000 # ... Environment variables prefixed with KAFKA_ are internal to AMQ Streams and should be avoided. If you set a custom environment variable that is already in use by AMQ Streams, it is ignored and a warning is recorded in the log. 13.2.40.1. ContainerTemplate schema properties Property Description env Environment variables which should be applied to the container. ContainerEnvVar array securityContext Security context for the container. For more information, see the external documentation for core/v1 securitycontext . SecurityContext 13.2.41. ContainerEnvVar schema reference Used in: ContainerTemplate Property Description name The environment variable key. string value The environment variable value. string 13.2.42. ZookeeperClusterSpec schema reference Used in: KafkaSpec Full list of ZookeeperClusterSpec schema properties Configures a ZooKeeper cluster. 13.2.42.1. config Use the config properties to configure ZooKeeper options as keys. Standard Apache ZooKeeper configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Security (Encryption, Authentication, and Authorization) Listener configuration Configuration of data directories ZooKeeper cluster composition The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the ZooKeeper documentation with the exception of those managed directly by AMQ Streams.
Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: server. dataDir dataLogDir clientPort authProvider quorum.auth requireClientAuthScheme When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to ZooKeeper. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example ZooKeeper configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... zookeeper: # ... config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" # ... 13.2.42.2. logging ZooKeeper has a configurable logger: zookeeper.root.logger ZooKeeper uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... zookeeper: # ... logging: type: inline loggers: zookeeper.root.logger: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... zookeeper: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.42.3. ZookeeperClusterSpec schema properties Property Description replicas The number of pods in the cluster. integer image The docker image for the pods. string storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim]. EphemeralStorage , PersistentClaimStorage config The ZooKeeper broker config. Properties with the following prefixes cannot be set: server., dataDir, dataLogDir, clientPort, authProvider, quorum.auth, requireClientAuthScheme, snapshot.trust.empty, standaloneEnabled, reconfigEnabled, 4lw.commands.whitelist, secureClientPort, ssl., serverCnxnFactory, sslQuorum (with the exception of: ssl.protocol, ssl.quorum.protocol, ssl.enabledProtocols, ssl.quorum.enabledProtocols, ssl.ciphersuites, ssl.quorum.ciphersuites, ssl.hostnameVerification, ssl.quorum.hostnameVerification). map livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. 
JvmOptions resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics logging Logging configuration for ZooKeeper. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template for ZooKeeper cluster resources. The template allows users to specify how the StatefulSet , Pods and Services are generated. ZookeeperClusterTemplate 13.2.43. ZookeeperClusterTemplate schema reference Used in: ZookeeperClusterSpec Property Description statefulset Template for ZooKeeper StatefulSet . StatefulSetTemplate pod Template for ZooKeeper Pods . PodTemplate clientService Template for ZooKeeper client Service . InternalServiceTemplate nodesService Template for ZooKeeper nodes Service . InternalServiceTemplate persistentVolumeClaim Template for all ZooKeeper PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for ZooKeeper PodDisruptionBudget . PodDisruptionBudgetTemplate zookeeperContainer Template for the ZooKeeper container. ContainerTemplate serviceAccount Template for the ZooKeeper service account. ResourceTemplate 13.2.44. EntityOperatorSpec schema reference Used in: KafkaSpec Property Description topicOperator Configuration of the Topic Operator. EntityTopicOperatorSpec userOperator Configuration of the User Operator. EntityUserOperatorSpec tlsSidecar TLS sidecar configuration. TlsSidecar template Template for Entity Operator resources. The template allows users to specify how the Deployment and Pods are generated. EntityOperatorTemplate 13.2.45. EntityTopicOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityTopicOperatorSpec schema properties Configures the Topic Operator. 13.2.45.1. logging The Topic Operator has a configurable logger: rootLogger.level The Topic Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.topicOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ...
topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.45.2. EntityTopicOperatorSpec schema properties Property Description watchedNamespace The namespace the Topic Operator should watch. string image The image to use for the Topic Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer startupProbe Pod startup checking. Probe livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements topicMetadataMaxAttempts The number of attempts at getting topic metadata. integer logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions 13.2.46. EntityUserOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityUserOperatorSpec schema properties Configures the User Operator. 13.2.46.1. logging The User Operator has a configurable logger: rootLogger.level The User Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.userOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... 
userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.46.2. EntityUserOperatorSpec schema properties Property Description watchedNamespace The namespace the User Operator should watch. string image The image to use for the User Operator. string reconciliationIntervalSeconds Interval between periodic reconciliations. integer zookeeperSessionTimeoutSeconds Timeout for the ZooKeeper session. integer secretPrefix The prefix that will be added to the KafkaUser name to be used as the Secret name. string livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements logging Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging jvmOptions JVM Options for pods. JvmOptions 13.2.47. TlsSidecar schema reference Used in: CruiseControlSpec , EntityOperatorSpec Full list of TlsSidecar schema properties Configures a TLS sidecar, which is a container that runs in a pod, but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt communication between components and ZooKeeper. The TLS sidecar is used in: Entity Operator Cruise Control The TLS sidecar is configured using the tlsSidecar property in: Kafka.spec.entityOperator Kafka.spec.cruiseControl The TLS sidecar supports the following additional options: image resources logLevel readinessProbe livenessProbe The resources property specifies the memory and CPU resources allocated for the TLS sidecar. The image property configures the container image which will be used. The readinessProbe and livenessProbe properties configure healthcheck probes for the TLS sidecar. The logLevel property specifies the logging level. The following logging levels are supported: emerg alert crit err warning notice info debug The default value is notice . Example TLS sidecar configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... entityOperator: # ... tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # ... cruiseControl: # ... tlsSidecar: image: my-org/my-image:latest resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logLevel: debug readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... 13.2.47.1. TlsSidecar schema properties Property Description image The docker image for the container. string livenessProbe Pod liveness checking. Probe logLevel The log level for the TLS sidecar. Default value is notice . string (one of [emerg, debug, crit, err, alert, warning, notice, info]) readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements 13.2.48. EntityOperatorTemplate schema reference Used in: EntityOperatorSpec Property Description deployment Template for Entity Operator Deployment . ResourceTemplate pod Template for Entity Operator Pods . 
PodTemplate topicOperatorContainer Template for the Entity Topic Operator container. ContainerTemplate userOperatorContainer Template for the Entity User Operator container. ContainerTemplate tlsSidecarContainer Template for the Entity Operator TLS sidecar container. ContainerTemplate serviceAccount Template for the Entity Operator service account. ResourceTemplate 13.2.49. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Description generateCertificateAuthority If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. boolean generateSecretOwnerReference If true , the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true , the CA Secrets are also deleted. If false , the ownerReference is disabled. If the Kafka resource is deleted when false , the CA Secrets are retained and available for reuse. Default is true . boolean validityDays The number of days generated certificates should be valid for. The default is 365. integer renewalDays The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. integer certificateExpirationPolicy How CA certificate expiration should be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. string (one of [replace-key, renew-certificate]) 13.2.50. CruiseControlSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string tlsSidecar TLS sidecar configuration. TlsSidecar resources CPU and memory resources to reserve for the Cruise Control container. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking for the Cruise Control container. Probe readinessProbe Pod readiness checking for the Cruise Control container. Probe jvmOptions JVM Options for the Cruise Control container. JvmOptions logging Logging configuration (Log4j 2) for Cruise Control. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template to specify how Cruise Control resources, Deployments and Pods , are generated. CruiseControlTemplate brokerCapacity The Cruise Control brokerCapacity configuration. BrokerCapacity config The Cruise Control configuration. For a full list of configuration options, refer to https://github.com/linkedin/cruise-control/wiki/Configurations .
Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path,webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required,metric.reporter.sampler.bootstrap.servers, metric.reporter.topic, partition.metric.sample.store.topic, broker.metric.sample.store.topic,capacity.config.file, self.healing., anomaly.detection., ssl. (with the exception of: ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, webserver.http.cors.enabled,webserver.http.cors.origin, webserver.http.cors.exposeheaders). map metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics 13.2.51. CruiseControlTemplate schema reference Used in: CruiseControlSpec Property Description deployment Template for Cruise Control Deployment . ResourceTemplate pod Template for Cruise Control Pods . PodTemplate apiService Template for Cruise Control API Service . InternalServiceTemplate podDisruptionBudget Template for Cruise Control PodDisruptionBudget . PodDisruptionBudgetTemplate cruiseControlContainer Template for the Cruise Control container. ContainerTemplate tlsSidecarContainer Template for the Cruise Control TLS sidecar container. ContainerTemplate serviceAccount Template for the Cruise Control service account. ResourceTemplate 13.2.52. BrokerCapacity schema reference Used in: CruiseControlSpec Property Description disk Broker capacity for disk in bytes, for example, 100Gi. string cpuUtilization Broker capacity for CPU resource utilization as a percentage (0 - 100). integer inboundNetwork Broker capacity for inbound network throughput in bytes per second, for example, 10000KB/s. string outboundNetwork Broker capacity for outbound network throughput in bytes per second, for example 10000KB/s. string 13.2.53. KafkaExporterSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string groupRegex Regular expression to specify which consumer groups to collect. Default value is .* . string topicRegex Regular expression to specify which topics to collect. Default value is .* . string resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements logging Only log messages with the given severity or above. Valid levels: [ debug , info , warn , error , fatal ]. Default log level is info . string enableSaramaLogging Enable Sarama logging, a Go client library used by the Kafka Exporter. boolean template Customization of deployment templates and pods. KafkaExporterTemplate livenessProbe Pod liveness check. Probe readinessProbe Pod readiness check. Probe 13.2.54. KafkaExporterTemplate schema reference Used in: KafkaExporterSpec Property Description deployment Template for Kafka Exporter Deployment . ResourceTemplate pod Template for Kafka Exporter Pods . PodTemplate service The service property has been deprecated. The Kafka Exporter service has been removed. Template for Kafka Exporter Service . ResourceTemplate container Template for the Kafka Exporter container. ContainerTemplate serviceAccount Template for the Kafka Exporter service account. ResourceTemplate 13.2.55. KafkaStatus schema reference Used in: Kafka Property Description conditions List of status conditions. 
Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer listeners Addresses of the internal and external listeners. ListenerStatus array clusterId Kafka cluster Id. string 13.2.56. Condition schema reference Used in: KafkaBridgeStatus , KafkaConnectorStatus , KafkaConnectS2IStatus , KafkaConnectStatus , KafkaMirrorMaker2Status , KafkaMirrorMakerStatus , KafkaRebalanceStatus , KafkaStatus , KafkaTopicStatus , KafkaUserStatus Property Description type The unique identifier of a condition, used to distinguish between other conditions in the resource. string status The status of the condition, either True, False or Unknown. string lastTransitionTime Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone. string reason The reason for the condition's last transition (a single word in CamelCase). string message Human-readable message indicating details about the condition's last transition. string 13.2.57. ListenerStatus schema reference Used in: KafkaStatus Property Description type The type of the listener. Can be one of the following three types: plain , tls , and external . string addresses A list of the addresses for this listener. ListenerAddress array bootstrapServers A comma-separated list of host:port pairs for connecting to the Kafka cluster using this listener. string certificates A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for tls and external listeners. string array 13.2.58. ListenerAddress schema reference Used in: ListenerStatus Property Description host The DNS name or IP address of the Kafka bootstrap service. string port The port of the Kafka bootstrap service. integer 13.2.59. KafkaConnect schema reference Property Description spec The specification of the Kafka Connect cluster. KafkaConnectSpec status The status of the Kafka Connect cluster. KafkaConnectStatus 13.2.60. KafkaConnectSpec schema reference Used in: KafkaConnect Full list of KafkaConnectSpec schema properties Configures a Kafka Connect cluster. 13.2.60.1. config Use the config properties to configure Kafka options as keys. Standard Apache Kafka Connect configuration may be provided, restricted to those properties not managed directly by AMQ Streams. Configuration options that cannot be configured relate to: Kafka cluster bootstrap address Security (Encryption, Authentication, and Authorization) Listener / REST interface configuration Plugin path configuration The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. listeners plugin.path rest. bootstrap.servers When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect. Important The Cluster Operator does not validate keys or values in the config object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. 
In this circumstance, fix the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes. Certain options have default values: group.id with default value connect-cluster offset.storage.topic with default value connect-cluster-offsets config.storage.topic with default value connect-cluster-configs status.storage.topic with default value connect-cluster-status key.converter with default value org.apache.kafka.connect.json.JsonConverter value.converter with default value org.apache.kafka.connect.json.JsonConverter These options are automatically configured in case they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties. There are exceptions to the forbidden options. You can use three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Example Kafka Connect configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 13.2.60.2. logging Kafka Connect (and Kafka Connect with Source2Image support) has its own configurable loggers: connect.root.logger.level log4j.logger.org.reflections Further loggers are added depending on the Kafka Connect plugins running. Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod: curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/ Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. 
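For reference, the ConfigMap referenced by an external logging configuration might look like the following minimal sketch. It reuses the customConfigMap name and connect-logging.log4j key from the external logging example below; the appender and logger entries shown here are illustrative assumptions, and the key under data must match logging.valueFrom.configMapKeyRef.key :

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  connect-logging.log4j: |
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    log4j.logger.org.reflections=ERROR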
For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: inline loggers: connect.root.logger.level: "INFO" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j # ... Any available loggers that are not configured have their level set to OFF . If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.60.3. KafkaConnectSpec schema properties Property Description version The Kafka Connect version. Defaults to 2.8.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string bootstrapServers Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> : <port> pairs. string tls TLS configuration. KafkaConnectTls authentication Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map resources The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options. KafkaJmxOptions logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration build Configures how the Connect container image should be built. Optional. Build clientRackInitImage The image of the init container used for initializing the client.rack . string metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. 
JmxPrometheusExporterMetrics rack Configuration of the node label which will be used as the client.rack consumer configuration. Rack 13.2.61. KafkaConnectTls schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec Full list of KafkaConnectTls schema properties Configures TLS trusted certificates for connecting Kafka Connect to the cluster. 13.2.61.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . 13.2.61.2. KafkaConnectTls schema properties Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.62. KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationTls schema properties To configure TLS client authentication, set the type property to the value tls . TLS client authentication uses a TLS certificate to authenticate. 13.2.62.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note TLS client authentication can only be used with TLS connections. Example TLS client authentication configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key 13.2.62.2. KafkaClientAuthenticationTls schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Description certificateAndKey Reference to the Secret which holds the certificate and private key pair. CertAndKeySecretSource type Must be tls . string 13.2.63. KafkaClientAuthenticationScramSha512 schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha512 schema properties To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512 . The SCRAM-SHA-512 authentication mechanism requires a username and password. 13.2.63.1. username Specify the username in the username property. 13.2.63.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. 
If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 13.2.63.3. KafkaClientAuthenticationScramSha512 schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationScramSha512 type from KafkaClientAuthenticationTls , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value scram-sha-512 for the type KafkaClientAuthenticationScramSha512 . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-512 . string username Username used for the authentication. string 13.2.64. PasswordSecretSource schema reference Used in: KafkaClientAuthenticationPlain , KafkaClientAuthenticationScramSha512 Property Description password The name of the key in the Secret under which the password is stored. string secretName The name of the Secret containing the password. string 13.2.65. KafkaClientAuthenticationPlain schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationPlain schema properties To configure SASL-based PLAIN authentication, set the type property to plain . SASL PLAIN authentication mechanism requires a username and password. Warning The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled. 13.2.65.1. username Specify the username in the username property. 13.2.65.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for PLAIN client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. 
Example SASL-based PLAIN client authentication configuration authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name 13.2.65.3. KafkaClientAuthenticationPlain schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationPlain type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationOAuth . It must have the value plain for the type KafkaClientAuthenticationPlain . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be plain . string username Username used for the authentication. string 13.2.66. KafkaClientAuthenticationOAuth schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationOAuth schema properties To configure OAuth client authentication, set the type property to oauth . OAuth authentication can be configured using one of the following options: Client ID and secret Client ID and refresh token Access token TLS Client ID and secret You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret property, specify a link to a Secret containing the client secret. An example of OAuth client authentication using client ID and client secret authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. Client ID and refresh token You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token. An example of OAuth client authentication using client ID and refresh token authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token Access token You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri . In the accessToken property, specify a link to a Secret containing the access token. An example of OAuth client authentication using only an access token authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token TLS Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate.
If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format. An example of TLS certificates provided authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If it is not required, you can disable the hostname verification. An example of disabled TLS hostname verification authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true 13.2.66.1. KafkaClientAuthenticationOAuth schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationOAuth type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain . It must have the value oauth for the type KafkaClientAuthenticationOAuth . Property Description accessToken Link to OpenShift Secret containing the access token which was obtained from the authorization server. GenericSecretSource accessTokenIsJwt Configure whether access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. Defaults to true . boolean audience OAuth audience to use when authenticating against the authorization server. Some authorization servers require the audience to be explicitly set. The possible values depend on how the authorization server is configured. By default, audience is not specified when performing the token endpoint request. string clientId OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. GenericSecretSource disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean maxTokenExpirySeconds Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens. integer refreshToken Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server. GenericSecretSource scope OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how authorization server is configured. By default scope is not specified when doing the token endpoint request. string tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array tokenEndpointUri Authorization server token endpoint URI. string type Must be oauth . string 13.2.67. 
JaegerTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the JaegerTracing type from other subtypes which may be added in the future. It must have the value jaeger for the type JaegerTracing . Property Description type Must be jaeger . string 13.2.68. KafkaConnectTemplate schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Property Description deployment Template for Kafka Connect Deployment . DeploymentTemplate pod Template for Kafka Connect Pods . PodTemplate apiService Template for Kafka Connect API Service . InternalServiceTemplate connectContainer Template for the Kafka Connect container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate podDisruptionBudget Template for Kafka Connect PodDisruptionBudget . PodDisruptionBudgetTemplate serviceAccount Template for the Kafka Connect service account. ResourceTemplate clusterRoleBinding Template for the Kafka Connect ClusterRoleBinding. ResourceTemplate buildPod Template for Kafka Connect Build Pods . The build pod is used only on OpenShift. PodTemplate buildContainer Template for the Kafka Connect Build container. The build container is used only on OpenShift. ContainerTemplate buildConfig Template for the Kafka Connect BuildConfig used to build new container images. The BuildConfig is used only on OpenShift. ResourceTemplate buildServiceAccount Template for the Kafka Connect Build service account. ResourceTemplate 13.2.69. DeploymentTemplate schema reference Used in: KafkaBridgeTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate deploymentStrategy DeploymentStrategy which will be used for this Deployment. Valid values are RollingUpdate and Recreate . Defaults to RollingUpdate . string (one of [RollingUpdate, Recreate]) 13.2.70. ExternalConfiguration schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of ExternalConfiguration schema properties Configures external storage properties that define configuration options for Kafka Connect connectors. You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec and KafkaConnectS2I.spec . When applied, the environment variables and volumes are available for use when developing your connectors. 13.2.70.1. env Use the env property to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret. Example Secret containing values for environment variables apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE= Note The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_ . To mount a value from a Secret to an environment variable, use the valueFrom property and the secretKeyRef . Example environment variables set to values from a Secret apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... 
externalConfiguration: env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey A common use case for mounting Secrets is for a connector to communicate with Amazon AWS. The connector needs to be able to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . To mount a value from a ConfigMap to an environment variable, use configMapKeyRef in the valueFrom property as shown in the following example. Example environment variables set to values from a ConfigMap apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key 13.2.70.2. volumes Use volumes to mount ConfigMaps or Secrets to a Kafka Connect pod. Using volumes instead of environment variables is useful in the following scenarios: Mounting a properties file that is used to configure Kafka Connect connectors Mounting truststores or keystores with TLS certificates Volumes are mounted inside the Kafka Connect containers on the path /opt/kafka/external-configuration/ <volume-name> . For example, the files from a volume named connector-config will appear in the directory /opt/kafka/external-configuration/connector-config . Configuration providers load values from outside the configuration. Use a provider mechanism to avoid passing restricted information over the Kafka Connect REST interface. FileConfigProvider loads configuration values from properties in a file. DirectoryConfigProvider loads configuration values from separate files within a directory structure. Use a comma-separated list if you want to add more than one provider, including custom providers. You can use custom providers to load values from other file locations. Using FileConfigProvider to load property values In this example, a Secret named mysecret contains connector properties that specify a database username and password: Example Secret with database properties apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password 1 The connector configuration in properties file format. 2 Database username and password properties used in the configuration. The Secret and the FileConfigProvider configuration provider are specified in the Kafka Connect configuration. The Secret is mounted to a volume named connector-config . FileConfigProvider is given the alias file . Example external volumes set to values from a Secret apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 #... externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4 1 The alias for the configuration provider is used to define other configuration parameters. 2 FileConfigProvider provides values from properties files. The parameter uses the alias from config.providers , taking the form config.providers.${alias}.class . 3 The name of the volume containing the Secret. Each volume must specify a name in the name property and a reference to a ConfigMap or Secret. 4 The name of the Secret. Placeholders for the property values in the Secret are referenced in the connector configuration. The placeholder structure is file: PATH-AND-FILE-NAME : PROPERTY .
FileConfigProvider reads and extracts the database username and password property values from the mounted Secret in connector configurations. Example connector configuration showing placeholders for external values apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: "3306" database.user: "${file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}" database.password: "${file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}" database.server.id: "184054" #... Using DirectoryConfigProvider to load property values from separate files In this example, a Secret contains TLS truststore and keystore user credentials in separate files. Example Secret with user credentials apiVersion: v1 kind: Secret metadata: name: mysecret labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: 1 ca.crt: # Public key of the client CA user.crt: # User certificate that contains the public key of the user user.key: # Private key of the user user.p12: # PKCS #12 archive file for storing certificates and keys user.password: # Password for protecting the PKCS #12 archive file The Secret and the DirectoryConfigProvider configuration provider are specified in the Kafka Connect configuration. The Secret is mounted to a volume named connector-config . DirectoryConfigProvider is given the alias directory . Example external volumes set for user credentials files apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: directory config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 1 #... externalConfiguration: volumes: - name: connector-config secret: secretName: mysecret 1 1 The DirectoryConfigProvider provides values from files in a directory. The parameter uses the alias from config.providers , taking the form config.providers.${alias}.class . Placeholders for the credentials are referenced in the connector configuration. The placeholder structure is directory: PATH : FILE-NAME . DirectoryConfigProvider reads and extracts the credentials from the mounted Secret in connector configurations. Example connector configuration showing placeholders for external values apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: security.protocol: SSL ssl.truststore.type: PEM ssl.truststore.location: "${directory:/opt/kafka/external-configuration/connector-config:ca.crt}" ssl.keystore.type: PEM ssl.keystore.location: "${directory:/opt/kafka/external-configuration/connector-config:user.key}" #... 13.2.70.3. ExternalConfiguration schema properties Property Description env Makes data from a Secret or ConfigMap available in the Kafka Connect pods as environment variables. ExternalConfigurationEnv array volumes Makes data from a Secret or ConfigMap available in the Kafka Connect pods as volumes. ExternalConfigurationVolumeSource array 13.2.71. ExternalConfigurationEnv schema reference Used in: ExternalConfiguration Property Description name Name of the environment variable which will be passed to the Kafka Connect pods.
The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . string valueFrom Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to Secret or ConfigMap field. The field has to specify exactly one Secret or ConfigMap. ExternalConfigurationEnvVarSource 13.2.72. ExternalConfigurationEnvVarSource schema reference Used in: ExternalConfigurationEnv Property Description configMapKeyRef Reference to a key in a ConfigMap. For more information, see the external documentation for core/v1 configmapkeyselector . ConfigMapKeySelector secretKeyRef Reference to a key in a Secret. For more information, see the external documentation for core/v1 secretkeyselector . SecretKeySelector 13.2.73. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Description configMap Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 configmapvolumesource . ConfigMapVolumeSource name Name of the volume which will be added to the Kafka Connect pods. string secret Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 secretvolumesource . SecretVolumeSource 13.2.74. Build schema reference Used in: KafkaConnectS2ISpec , KafkaConnectSpec Full list of Build schema properties Configures additional connectors for Kafka Connect deployments. 13.2.74.1. output To build new container images with additional connector plugins, AMQ Streams requires a container registry where the images can be pushed to, stored, and pulled from. AMQ Streams does not run its own container registry, so a registry must be provided. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub . The container registry is configured in the .spec.build.output section of the KafkaConnect custom resource. The output configuration, which is required, supports two types: docker and imagestream . Using Docker registry To use a Docker registry, you have to specify the type as docker , and the image field with the full name of the new container image. The full name must include: The address of the registry Port number (if listening on a non-standard port) The tag of the new container image Example valid container image names: docker.io/my-org/my-image/my-tag quay.io/my-org/my-image/my-tag image-registry.image-registry.svc:5000/myproject/kafka-connect-build:latest Each Kafka Connect deployment must use a separate image, which can mean different tags at the most basic level. If the registry requires authentication, use the pushSecret to set a name of the Secret with the registry credentials. For the Secret, use the kubernetes.io/dockerconfigjson type and a .dockerconfigjson file to contain the Docker credentials. For more information on pulling an image from a private registry, see Create a Secret based on existing Docker credentials . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #... 1 (Required) Type of output used by AMQ Streams. 2 (Required) Full name of the image used, including the repository and tag. 3 (Optional) Name of the secret with the container registry credentials. 
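For example, if you already have a Docker configuration file containing the registry credentials, you could create the push secret from it before referencing it in pushSecret . This is a minimal sketch: the secret name my-registry-credentials and the ./config.json path are placeholders, and the secret name must match the value set in pushSecret .
# Creates a Secret of type kubernetes.io/dockerconfigjson from an existing Docker configuration file
oc create secret generic my-registry-credentials \
  --from-file=.dockerconfigjson=./config.json \
  --type=kubernetes.io/dockerconfigjson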
Using OpenShift ImageStream Instead of Docker, you can use OpenShift ImageStream to store a new container image. The ImageStream has to be created manually before deploying Kafka Connect. To use ImageStream, set the type to imagestream , and use the image property to specify the name of the ImageStream and the tag used. For example, my-connect-image-stream:latest . Example output configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: type: imagestream 1 image: my-connect-build:latest 2 #... 1 (Required) Type of output used by AMQ Streams. 2 (Required) Name of the ImageStream and tag. 13.2.74.2. plugins Connector plugins are a set of files that define the implementation required to connect to certain types of external system. The connector plugins required for a container image must be configured using the .spec.build.plugins property of the KafkaConnect custom resource. Each connector plugin must have a name which is unique within the Kafka Connect deployment. Additionally, the plugin artifacts must be listed. These artifacts are downloaded by AMQ Streams, added to the new container image, and used in the Kafka Connect deployment. The connector plugin artifacts can also include additional components, such as (de)serializers. Each connector plugin is downloaded into a separate directory so that the different connectors and their dependencies are properly sandboxed . Each plugin must be configured with at least one artifact . Example plugins configuration with two connector plugins apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: 1 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #... 1 (Required) List of connector plugins and their artifacts. AMQ Streams supports the following types of artifacts: JAR files, which are downloaded and used directly; TGZ archives, which are downloaded and unpacked; and other artifacts, which are downloaded and used directly. Important AMQ Streams does not perform any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually, and configure the checksum verification to make sure the same artifact is used in the automated build and in the Kafka Connect deployment. Using JAR artifacts JAR artifacts represent a JAR file that is downloaded and added to a container image. To use JAR artifacts, set the type property to jar , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum of the artifact while building the new container image. Example JAR artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #...
plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using TGZ artifacts TGZ artifacts are used to download TAR archives that have been compressed using Gzip compression. The TGZ artifact can contain the whole Kafka Connect connector, even when comprising multiple different files. The TGZ artifact is automatically downloaded and unpacked by AMQ Streams while building the new container image. To use TGZ artifacts, set the type property to tgz , and specify the download location using the url property. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum before unpacking it and building the new container image. Example TGZ artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.tgz 2 sha512sum: 158...jg10 3 #... 1 (Required) Type of artifact. 2 (Required) URL from which the archive is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. Using other artifacts Other artifacts represent any kind of file that is downloaded and added to a container image. If you want to use a specific name for the artifact in the resulting container image, use the fileName field. If a file name is not specified, the file is named based on the URL hash. Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum of the artifact while building the new container image. Example other artifact apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... build: output: #... plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #... 1 (Required) Type of artifact. 2 (Required) URL from which the artifact is downloaded. 3 (Optional) SHA-512 checksum to verify the artifact. 4 (Optional) The name under which the file is stored in the resulting container image. 13.2.74.3. Build schema properties Property Description output Configures where the newly built image should be stored. Required. The type depends on the value of the output.type property within the given object, which must be one of [docker, imagestream]. DockerOutput , ImageStreamOutput resources CPU and memory resources to reserve for the build. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements plugins List of connector plugins which should be added to the Kafka Connect. Required. Plugin array 13.2.75. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Description image The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. string pushSecret Container Registry Secret with the credentials for pushing the newly built image.
string additionalKanikoOptions Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run. These options are used only when the Kaniko executor is used to build the new image. They are ignored on OpenShift, which uses OpenShift builds instead. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. string array type Must be docker . string 13.2.76. ImageStreamOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the ImageStreamOutput type from DockerOutput . It must have the value imagestream for the type ImageStreamOutput . Property Description image The name and tag of the ImageStream where the newly built image will be pushed. For example my-custom-connect:latest . Required. string type Must be imagestream . string 13.2.77. Plugin schema reference Used in: Build Property Description name The unique name of the connector plugin. Will be used to generate the path where the connector artifacts will be stored. The name has to be unique within the KafkaConnect resource. The name has to follow the following pattern: ^[a-z][-_a-z0-9]*[a-z]$ . Required. string artifacts List of artifacts which belong to this connector plugin. Required. JarArtifact , TgzArtifact , ZipArtifact , OtherArtifact array 13.2.78. JarArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string type Must be jar . string 13.2.79. TgzArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string type Must be tgz . string 13.2.80. ZipArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string type Must be zip . string 13.2.81.
OtherArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. string fileName Name under which the artifact will be stored. string type Must be other . string 13.2.82. KafkaConnectStatus schema reference Used in: KafkaConnect Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.83. ConnectorPlugin schema reference Used in: KafkaConnectS2IStatus , KafkaConnectStatus , KafkaMirrorMaker2Status Property Description type The type of the connector plugin. The available types are sink and source . string version The version of the connector plugin. string class The class of the connector plugin. string 13.2.84. KafkaConnectS2I schema reference The type KafkaConnectS2I has been deprecated. Please use Build instead. Property Description spec The specification of the Kafka Connect Source-to-Image (S2I) cluster. KafkaConnectS2ISpec status The status of the Kafka Connect Source-to-Image (S2I) cluster. KafkaConnectS2IStatus 13.2.85. KafkaConnectS2ISpec schema reference Used in: KafkaConnectS2I Full list of KafkaConnectS2ISpec schema properties Configures a Kafka Connect cluster with Source-to-Image (S2I) support. When extending Kafka Connect with connector plugins on OpenShift (only), you can use OpenShift builds and S2I to create a container image that is used by the Kafka Connect deployment. The configuration options are similar to Kafka Connect configuration using the KafkaConnectSpec schema . 13.2.85.1. KafkaConnectS2ISpec schema properties Property Description version The Kafka Connect version. Defaults to 2.8.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string buildResources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements bootstrapServers Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> : <port> pairs. string tls TLS configuration. KafkaConnectTls authentication Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The Kafka Connect configuration. 
Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map resources The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options. KafkaJmxOptions logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration build Configures how the Connect container image should be built. Optional. Build clientRackInitImage The image of the init container used for initializing the client.rack . string insecureSourceRepository When true this configures the source repository with the 'Local' reference policy and an import policy that accepts insecure source tags. boolean metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics rack Configuration of the node label which will be used as the client.rack consumer configuration. Rack 13.2.86. KafkaConnectS2IStatus schema reference Used in: KafkaConnectS2I Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array buildConfigName The name of the build configuration. string labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.87. KafkaTopic schema reference Property Description spec The specification of the topic. KafkaTopicSpec status The status of the topic. KafkaTopicStatus 13.2.88. KafkaTopicSpec schema reference Used in: KafkaTopic Property Description partitions The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning. When absent this will default to the broker configuration for num.partitions . integer replicas The number of replicas the topic should have. When absent this will default to the broker configuration for default.replication.factor . integer config The topic configuration. map topicName The name of the topic. 
When absent this will default to the metadata.name of the topic. It is recommended to not set this unless the topic name is not a valid OpenShift resource name. string 13.2.89. KafkaTopicStatus schema reference Used in: KafkaTopic Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer topicName Topic name. string 13.2.90. KafkaUser schema reference Property Description spec The specification of the user. KafkaUserSpec status The status of the Kafka User. KafkaUserStatus 13.2.91. KafkaUserSpec schema reference Used in: KafkaUser Property Description authentication Authentication mechanism enabled for this Kafka user. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512]. KafkaUserTlsClientAuthentication , KafkaUserScramSha512ClientAuthentication authorization Authorization rules for this Kafka user. The type depends on the value of the authorization.type property within the given object, which must be one of [simple]. KafkaUserAuthorizationSimple quotas Quotas on requests to control the broker resources used by clients. Network bandwidth and request rate quotas can be enforced.Kafka documentation for Kafka User quotas can be found at http://kafka.apache.org/documentation/#design_quotas . KafkaUserQuotas template Template to specify how Kafka User Secrets are generated. KafkaUserTemplate 13.2.92. KafkaUserTlsClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsClientAuthentication type from KafkaUserScramSha512ClientAuthentication . It must have the value tls for the type KafkaUserTlsClientAuthentication . Property Description type Must be tls . string 13.2.93. KafkaUserScramSha512ClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserScramSha512ClientAuthentication type from KafkaUserTlsClientAuthentication . It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication . Property Description type Must be scram-sha-512 . string 13.2.94. KafkaUserAuthorizationSimple schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserAuthorizationSimple type from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple . Property Description type Must be simple . string acls List of ACL rules which should be applied to this user. AclRule array 13.2.95. AclRule schema reference Used in: KafkaUserAuthorizationSimple Full list of AclRule schema properties Configures access control rule for a KafkaUser when brokers are using the AclAuthorizer . Example KafkaUser configuration with authorization apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: prefix operation: Read 13.2.95.1. resource Use the resource property to specify the resource that the rule applies to. 
Simple authorization supports four resource types, which are specified in the type property: Topics ( topic ) Consumer Groups ( group ) Clusters ( cluster ) Transactional IDs ( transactionalId ) For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property. Cluster type resources have no name. A name is specified as a literal or a prefix using the patternType property. Literal names are taken exactly as they are specified in the name field. Prefix names use the value from the name as a prefix, and will apply the rule to all resources with names starting with the value. 13.2.95.2. type The type of rule, which is to allow or deny (not currently supported) an operation. The type field is optional. If type is unspecified, the ACL rule is treated as an allow rule. 13.2.95.3. operation Specify an operation for the rule to allow or deny. The following operations are supported: Read Write Delete Alter Describe All IdempotentWrite ClusterAction Create AlterConfigs DescribeConfigs Only certain operations work with each resource. For more details about AclAuthorizer , ACLs and supported combinations of resources and operations, see Authorization and ACLs . 13.2.95.4. host Use the host property to specify a remote host from which the rule is allowed or denied. Use an asterisk ( * ) to allow or deny the operation from all hosts. The host field is optional. If host is unspecified, the * value is used by default. 13.2.95.5. AclRule schema properties Property Description host The host from which the action described in the ACL rule is allowed or denied. string operation Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) resource Indicates the resource for which given ACL rule applies. The type depends on the value of the resource.type property within the given object, which must be one of [topic, group, cluster, transactionalId]. AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource type The type of the rule. Currently the only supported type is allow . ACL rules with type allow are used to allow user to execute the specified operations. Default value is allow . string (one of [allow, deny]) 13.2.96. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Description type Must be topic . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) 13.2.97. 
AclRuleGroupResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleGroupResource type from AclRuleTopicResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value group for the type AclRuleGroupResource . Property Description type Must be group . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) 13.2.98. AclRuleClusterResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleTransactionalIdResource . It must have the value cluster for the type AclRuleClusterResource . Property Description type Must be cluster . string 13.2.99. AclRuleTransactionalIdResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTransactionalIdResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource . It must have the value transactionalId for the type AclRuleTransactionalIdResource . Property Description type Must be transactionalId . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal]) 13.2.100. KafkaUserQuotas schema reference Used in: KafkaUserSpec Full list of KafkaUserQuotas schema properties Kafka allows a user to set quotas to control the use of resources by clients. 13.2.100.1. quotas You can configure your clients to use the following types of quotas: Network usage quotas specify the byte rate threshold for each group of clients sharing a quota. CPU utilization quotas specify a window for broker requests from clients. The window is the percentage of time for clients to make requests. A client makes requests on the I/O threads and network threads of the broker. Partition mutation quotas limit the number of partition mutations which clients are allowed to make per second. A partition mutation quota prevents Kafka clusters from being overwhelmed by concurrent topic operations. Partition mutations occur in response to the following types of user requests: Creating partitions for a new topic Adding partitions to an existing topic Deleting partitions from a topic You can configure a partition mutation quota to control the rate at which mutations are accepted for user requests. Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients. 
AMQ Streams supports user-level quotas, but not client-level quotas. Example Kafka user quota configuration spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10 For more information about Kafka user quotas, refer to the Apache Kafka documentation . 13.2.100.2. KafkaUserQuotas schema properties Property Description consumerByteRate A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. integer controllerMutationRate A quota on the rate at which mutations are accepted for the create topics request, the create partitions request and the delete topics request. The rate is accumulated by the number of partitions created or deleted. number producerByteRate A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. integer requestPercentage A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads. integer 13.2.101. KafkaUserTemplate schema reference Used in: KafkaUserSpec Full list of KafkaUserTemplate schema properties Specify additional labels and annotations for the secret created by the User Operator. An example showing the KafkaUserTemplate apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 # ... 13.2.101.1. KafkaUserTemplate schema properties Property Description secret Template for KafkaUser resources. The template allows users to specify how the Secret with password or TLS certificates is generated. ResourceTemplate 13.2.102. KafkaUserStatus schema reference Used in: KafkaUser Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer username Username. string secret The name of Secret where the credentials are stored. string 13.2.103. KafkaMirrorMaker schema reference Property Description spec The specification of Kafka MirrorMaker. KafkaMirrorMakerSpec status The status of Kafka MirrorMaker. KafkaMirrorMakerStatus 13.2.104. KafkaMirrorMakerSpec schema reference Used in: KafkaMirrorMaker Full list of KafkaMirrorMakerSpec schema properties Configures Kafka MirrorMaker. 13.2.104.1. include Use the include property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster. The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B" or all topics using "*". You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker. 13.2.104.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters. Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair. 13.2.104.3. 
logging Kafka MirrorMaker has its own configurable logger: mirrormaker.root.logger MirrorMaker uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: inline loggers: mirrormaker.root.logger: "INFO" # ... apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.104.4. KafkaMirrorMakerSpec schema properties Property Description version The Kafka MirrorMaker version. Defaults to 2.8.0. Consult the documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Deployment . integer image The docker image for the pods. string consumer Configuration of source cluster. KafkaMirrorMakerConsumerSpec producer Configuration of target cluster. KafkaMirrorMakerProducerSpec resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements whitelist The whitelist property has been deprecated, and should now be configured using spec.include . List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression 'A|B' . Or, as a special case, you can mirror all topics using the regular expression '*'. You can also specify multiple regular expressions separated by commas. string include List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression 'A|B' . Or, as a special case, you can mirror all topics using the regular expression '*'. You can also specify multiple regular expressions separated by commas. string jvmOptions JVM Options for pods. JvmOptions logging Logging configuration for MirrorMaker. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. 
JmxPrometheusExporterMetrics tracing The configuration of tracing in Kafka MirrorMaker. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template to specify how Kafka MirrorMaker resources, Deployments and Pods , are generated. KafkaMirrorMakerTemplate livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe 13.2.105. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 13.2.105.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 13.2.105.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 13.2.105.3. config Use the consumer.config properties to configure Kafka options for the consumer. The config property contains the Kafka MirrorMaker consumer configuration options as keys, with values set in one of the following JSON types: String Number Boolean For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: bootstrap.servers group.id interceptor.classes ssl. ( not including specific exceptions ) sasl. security. When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker. Important The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker. 13.2.105.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 13.2.105.5. 
KafkaMirrorMakerConsumerSpec schema properties Property Description numStreams Specifies the number of consumer stream threads to create. integer offsetCommitInterval Specifies the offset auto-commit interval in ms. Default value is 60000. integer bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string groupId A unique string that identifies the consumer group this consumer belongs to. string authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. KafkaMirrorMakerTls 13.2.106. KafkaMirrorMakerTls schema reference Used in: KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaMirrorMakerTls schema properties Configures TLS trusted certificates for connecting MirrorMaker to the cluster. 13.2.106.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . 13.2.106.2. KafkaMirrorMakerTls schema properties Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.107. KafkaMirrorMakerProducerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerProducerSpec schema properties Configures a MirrorMaker producer. 13.2.107.1. abortOnSendFailure Use the producer.abortOnSendFailure property to configure how to handle message send failure from the producer. By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster: The Kafka MirrorMaker container is terminated in OpenShift. The container is then recreated. If the abortOnSendFailure option is set to false , message sending errors are ignored. 13.2.107.2. config Use the producer.config properties to configure Kafka options for the producer. The config property contains the Kafka MirrorMaker producer configuration options as keys, with values set in one of the following JSON types: String Number Boolean For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Interceptors Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: bootstrap.servers interceptor.classes ssl. ( not including specific exceptions ) sasl. security. When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker. 
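For illustration, the following is a minimal sketch of how the producer options described above fit together in a KafkaMirrorMaker resource. The resource name, the bootstrap address, and the example producer options ( compression.type and batch.size ) are placeholders chosen for this sketch, not required or recommended values:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  producer:
    # target cluster bootstrap address (placeholder)
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
    # ignore send errors instead of terminating the container
    abortOnSendFailure: false
    config:
      # standard Kafka producer options that are not managed by AMQ Streams
      compression.type: gzip
      batch.size: 8192
      # one of the permitted ssl. exceptions
      ssl.endpoint.identification.algorithm: HTTPS
  # ...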
Important The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.producer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker. 13.2.107.3. KafkaMirrorMakerProducerSpec schema properties Property Description bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string abortOnSendFailure Flag to set the MirrorMaker to exit on a failed send. Default value is true . boolean authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. KafkaMirrorMakerTls 13.2.108. KafkaMirrorMakerTemplate schema reference Used in: KafkaMirrorMakerSpec Property Description deployment Template for Kafka MirrorMaker Deployment . DeploymentTemplate pod Template for Kafka MirrorMaker Pods . PodTemplate podDisruptionBudget Template for Kafka MirrorMaker PodDisruptionBudget . PodDisruptionBudgetTemplate mirrorMakerContainer Template for Kafka MirrorMaker container. ContainerTemplate serviceAccount Template for the Kafka MirrorMaker service account. ResourceTemplate 13.2.109. KafkaMirrorMakerStatus schema reference Used in: KafkaMirrorMaker Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.110. KafkaBridge schema reference Property Description spec The specification of the Kafka Bridge. KafkaBridgeSpec status The status of the Kafka Bridge. KafkaBridgeStatus 13.2.111. KafkaBridgeSpec schema reference Used in: KafkaBridge Full list of KafkaBridgeSpec schema properties Configures a Kafka Bridge cluster. Configuration options relate to: Kafka cluster bootstrap address Security (Encryption, Authentication, and Authorization) Consumer configuration Producer configuration HTTP configuration 13.2.111.1. logging Kafka Bridge has its own configurable loggers: logger.bridge logger. <operation-id> You can replace <operation-id> in the logger. <operation-id> logger to set log levels for specific operations: createConsumer deleteConsumer subscribe unsubscribe poll assign commit send sendToPartition seekToBeginning seekToEnd seek healthy ready openapi Each operation is defined according to the OpenAPI specification, and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests. Each logger has to be configured by assigning it a name in the form http.openapi.operation.
<operation-id> . For example, configuring the logging level for the send operation logger means defining the following: Kafka Bridge uses the Apache log4j2 logger implementation. Loggers are defined in the log4j2.properties file, which has the following default configuration for healthy and ready endpoints: The log level of all other operations is set to INFO by default. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. The logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. Default logging is used if the name or key is not set. Inside the ConfigMap, the logging configuration is described using log4j.properties . For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # ... logging: type: inline loggers: logger.bridge.level: "INFO" # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: "DEBUG" # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties # ... Any available loggers that are not configured have their level set to OFF . If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 13.2.111.2. KafkaBridgeSpec schema properties Property Description replicas The number of pods in the Deployment . integer image The docker image for the pods. string bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string tls TLS configuration for connecting Kafka Bridge to the cluster. KafkaBridgeTls authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth http The HTTP related configuration. KafkaBridgeHttpConfig adminClient Kafka AdminClient related configuration. KafkaBridgeAdminClientSpec consumer Kafka consumer related configuration. KafkaBridgeConsumerSpec producer Kafka producer related configuration. KafkaBridgeProducerSpec resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements jvmOptions Currently not supported JVM Options for pods. JvmOptions logging Logging configuration for Kafka Bridge. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging enableMetrics Enable the metrics for the Kafka Bridge. Default is false. boolean livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. 
Probe template Template for Kafka Bridge resources. The template allows users to specify how the Deployment and Pods are generated. KafkaBridgeTemplate tracing The configuration of tracing in Kafka Bridge. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing 13.2.112. KafkaBridgeTls schema reference Used in: KafkaBridgeSpec Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.113. KafkaBridgeHttpConfig schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeHttpConfig schema properties Configures HTTP access to a Kafka cluster for the Kafka Bridge. The default HTTP configuration is for the Kafka Bridge to listen on port 8080. 13.2.113.1. cors As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP access methods. For the origins, you can use a URL or a Java regular expression. Example Kafka Bridge HTTP configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... http: port: 8080 cors: allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" # ... 13.2.113.2. KafkaBridgeHttpConfig schema properties Property Description port The port on which the server is listening. integer cors CORS configuration for the HTTP Bridge. KafkaBridgeHttpCors 13.2.114. KafkaBridgeHttpCors schema reference Used in: KafkaBridgeHttpConfig Property Description allowedOrigins List of allowed origins. Java regular expressions can be used. string array allowedMethods List of allowed HTTP methods. string array 13.2.115. KafkaBridgeAdminClientSpec schema reference Used in: KafkaBridgeSpec Property Description config The Kafka AdminClient configuration used for AdminClient instances created by the bridge. map 13.2.116. KafkaBridgeConsumerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeConsumerSpec schema properties Configures consumer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. bootstrap.servers group.id When one of the forbidden options is present in the config property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka. Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties .
Example Kafka Bridge consumer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... consumer: config: auto.offset.reset: earliest enable.auto.commit: true ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... 13.2.116.1. KafkaBridgeConsumerSpec schema properties Property Description config The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map 13.2.117. KafkaBridgeProducerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeProducerSpec schema properties Configures producer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean You can specify and configure the options listed in the Apache Kafka configuration documentation for producers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: ssl. sasl. security. bootstrap.servers When one of the forbidden options is present in the config property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . Example Kafka Bridge producer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... producer: config: acks: 1 delivery.timeout.ms: 300000 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS # ... 13.2.117.1. KafkaBridgeProducerSpec schema properties Property Description config The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map 13.2.118. KafkaBridgeTemplate schema reference Used in: KafkaBridgeSpec Property Description deployment Template for Kafka Bridge Deployment . DeploymentTemplate pod Template for Kafka Bridge Pods . PodTemplate apiService Template for Kafka Bridge API Service . InternalServiceTemplate podDisruptionBudget Template for Kafka Bridge PodDisruptionBudget . PodDisruptionBudgetTemplate bridgeContainer Template for the Kafka Bridge container. ContainerTemplate serviceAccount Template for the Kafka Bridge service account. ResourceTemplate 13.2.119. KafkaBridgeStatus schema reference Used in: KafkaBridge Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. 
integer url The URL at which external client applications can access the Kafka Bridge. string labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.120. KafkaConnector schema reference Property Description spec The specification of the Kafka Connector. KafkaConnectorSpec status The status of the Kafka Connector. KafkaConnectorStatus 13.2.121. KafkaConnectorSpec schema reference Used in: KafkaConnector Property Description class The Class for the Kafka Connector. string tasksMax The maximum number of tasks for the Kafka Connector. integer config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean 13.2.122. KafkaConnectorStatus schema reference Used in: KafkaConnector Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer connectorStatus The connector status, as reported by the Kafka Connect REST API. map tasksMax The maximum number of tasks for the Kafka Connector. integer topics The list of topics used by the Kafka Connector. string array 13.2.123. KafkaMirrorMaker2 schema reference Property Description spec The specification of the Kafka MirrorMaker 2.0 cluster. KafkaMirrorMaker2Spec status The status of the Kafka MirrorMaker 2.0 cluster. KafkaMirrorMaker2Status 13.2.124. KafkaMirrorMaker2Spec schema reference Used in: KafkaMirrorMaker2 Property Description version The Kafka Connect version. Defaults to 2.8.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. string replicas The number of pods in the Kafka Connect group. integer image The docker image for the pods. string connectCluster The cluster alias used for Kafka Connect. The alias must match a cluster in the list at spec.clusters . string clusters Kafka clusters for mirroring. KafkaMirrorMaker2ClusterSpec array mirrors Configuration of the MirrorMaker 2.0 connectors. KafkaMirrorMaker2MirrorSpec array resources The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options. KafkaJmxOptions logging Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging tracing The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. JaegerTracing template Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment , Pods and Service are generated. KafkaConnectTemplate externalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. ExternalConfiguration metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics 13.2.125. 
KafkaMirrorMaker2ClusterSpec schema reference Used in: KafkaMirrorMaker2Spec Full list of KafkaMirrorMaker2ClusterSpec schema properties Configures Kafka clusters for mirroring. 13.2.125.1. config Use the config properties to configure Kafka options. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 13.2.125.2. KafkaMirrorMaker2ClusterSpec schema properties Property Description alias Alias used to reference the Kafka cluster. string bootstrapServers A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. string tls TLS configuration for connecting MirrorMaker 2.0 connectors to a cluster. KafkaMirrorMaker2Tls authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker 2.0 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map 13.2.126. KafkaMirrorMaker2Tls schema reference Used in: KafkaMirrorMaker2ClusterSpec Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array 13.2.127. KafkaMirrorMaker2MirrorSpec schema reference Used in: KafkaMirrorMaker2Spec Property Description sourceCluster The alias of the source cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters . string targetCluster The alias of the target cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters . string sourceConnector The specification of the Kafka MirrorMaker 2.0 source connector. KafkaMirrorMaker2ConnectorSpec heartbeatConnector The specification of the Kafka MirrorMaker 2.0 heartbeat connector. KafkaMirrorMaker2ConnectorSpec checkpointConnector The specification of the Kafka MirrorMaker 2.0 checkpoint connector. KafkaMirrorMaker2ConnectorSpec topicsPattern A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. string topicsBlacklistPattern The topicsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.topicsExcludePattern . A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. string topicsExcludePattern A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. string groupsPattern A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. string groupsBlacklistPattern The groupsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.groupsExcludePattern . 
A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. string groupsExcludePattern A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. string 13.2.128. KafkaMirrorMaker2ConnectorSpec schema reference Used in: KafkaMirrorMaker2MirrorSpec Property Description tasksMax The maximum number of tasks for the Kafka Connector. integer config The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. map pause Whether the connector should be paused. Defaults to false. boolean 13.2.129. KafkaMirrorMaker2Status schema reference Used in: KafkaMirrorMaker2 Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array connectors List of MirrorMaker 2.0 connector statuses, as reported by the Kafka Connect REST API. map array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer 13.2.130. KafkaRebalance schema reference Property Description spec The specification of the Kafka rebalance. KafkaRebalanceSpec status The status of the Kafka rebalance. KafkaRebalanceStatus 13.2.131. KafkaRebalanceSpec schema reference Used in: KafkaRebalance Property Description goals A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. string array skipHardGoalCheck Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution from being found. Default is false. boolean excludedTopics A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format, consult the documentation for that class. string concurrentPartitionMovementsPerBroker The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. integer concurrentIntraBrokerPartitionMovements The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. integer concurrentLeaderMovements The upper bound of ongoing partition leadership movements. Default is 1000. integer replicationThrottle The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. integer replicaMovementStrategies A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. string array 13.2.132. KafkaRebalanceStatus schema reference Used in: KafkaRebalance Property Description conditions List of status conditions.
Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer sessionId The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. string optimizationResult A JSON object describing the optimization result. map
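To show how these options are combined in practice, the following is a minimal sketch of a KafkaRebalance resource. The cluster label value, the goal names, and the numeric settings are illustrative assumptions chosen for this sketch, not recommendations; the full list of supported goal names is in the Cruise Control documentation linked above:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    # links the rebalance to the Kafka cluster it applies to (assumed cluster name)
    strimzi.io/cluster: my-cluster
spec:
  # goals in decreasing priority; an empty list falls back to default.goals
  goals:
    - RackAwareGoal
    - ReplicaCapacityGoal
    - DiskCapacityGoal
  skipHardGoalCheck: false
  # topics matching this regular expression are excluded from the proposal
  excludedTopics: "internal-.*"
  concurrentPartitionMovementsPerBroker: 5
  concurrentLeaderMovements: 1000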
[ "spec: config: ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 1 ssl.enabled.protocols: \"TLSv1.2\" 2 ssl.protocol: \"TLSv1.2\" 3 ssl.endpoint.identification.algorithm: HTTPS 4", "create secret generic MY-SECRET --from-file= MY-TLS-CERTIFICATE-FILE.crt", "tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt", "tls: trustedCertificates: []", "resources: requests: cpu: 12 memory: 64Gi", "resources: limits: cpu: 12 memory: 64Gi", "resources: requests: cpu: 500m limits: cpu: 2.5", "resources: requests: memory: 512Mi limits: memory: 2Gi", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # image: my-org/my-image:latest # zookeeper: #", "readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5", "kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: \"USD3\" topic: \"USD4\" partition: \"USD5\" # further configuration", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # zookeeper: #", "jvmOptions: \"-Xmx\": \"2g\" \"-Xms\": \"2g\"", "jvmOptions: \"-XX\": \"UseG1GC\": true \"MaxGCPauseMillis\": 20 \"InitiatingHeapOccupancyPercent\": 35 \"ExplicitGCInvokesConcurrent\": true", "-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC", "jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl", "jvmOptions: gcLoggingEnabled: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" zookeeper.connection.timeout.ms: 6000 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: \"INFO\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: 
route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "listeners: - name: plain port: 9092 type: internal tls: false", "# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #", "# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #", "# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}'", "listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2", "listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "listeners: # - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "listeners: # - name: external port: 9094 type: ingress tls: true configuration: class: nginx-internal #", "listeners: # - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "listeners: # - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true #", "listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2", "listeners: # - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com", "listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com", "listeners: # - name: external port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002", "listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: 
type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3", "listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\"", "listeners: # - name: external port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: custom authorizerClass: io.mycompany.CustomAuthorizer superUsers: - CN=client_1 - user_2 - CN=client_3 # config: authorization.custom.property1=value1 authorization.custom.property2=value2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone config: # replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: #", "\" CLUSTER-NAME -kafka-0. 
CLUSTER-NAME -kafka-brokers\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: #", "template: statefulset: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2", "template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # spec: # template: pod: hostAliases: - ip: \"192.168.1.86\" hostnames: - \"my-host-1\" - \"my-host-2\" #", "template: externalBootstrapService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 perPodService: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32", "template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1", "template: kafkaContainer: env: - name: EXAMPLE_ENV_1 value: example.env.one - name: EXAMPLE_ENV_2 value: example.env.two securityContext: runAsUser: 2000", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # zookeeper: # config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: inline loggers: zookeeper.root.logger: \"INFO\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # cruiseControl: # tlsSidecar: image: my-org/my-image:latest resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logLevel: debug readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster 
offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #", "curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: inline loggers: connect.root.logger.level: \"INFO\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j #", "create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key", "authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key", "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm", "authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field", "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm", "authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token", "authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true", "apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: 
awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}\" database.password: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}\" database.server.id: \"184054\" #", "apiVersion: v1 kind: Secret metadata: name: mysecret labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: 1 ca.crt: # Public key of the client CA user.crt: # User certificate that contains the public key of the user user.key: # Private key of the user user.p12: # PKCS #12 archive file for storing certificates and keys user.password: # Password for protecting the PKCS #12 archive file", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 1 # externalConfiguration: volumes: - name: connector-config secret: secretName: mysecret", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: security.protocol: SSL ssl.truststore.type: PEM ssl.truststore.location: \"USD{directory:/opt/kafka/external-configuration/connector-config:ca.crt}\" ssl.keystore.type: PEM ssl.keystore.location: USD{directory:/opt/kafka/external-configuration/connector-config:user.key}\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: imagestream 1 image: my-connect-build:latest 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: 1 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: 
https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.jar 2 sha512sum: 158...jg10 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: prefix operation: Read", "spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: inline loggers: mirrormaker.root.logger: \"INFO\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties #", "logger.send.name = http.openapi.operation.send logger.send.level = DEBUG", "logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: inline loggers: logger.bridge.level: \"INFO\" # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: \"DEBUG\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # http: port: 8080 cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: config: acks: 1 delivery.timeout.ms: 300000 ssl.cipher.suites: 
\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS #" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/api_reference-str
Chapter 10. Registering the Hypervisor and Virtual Machine
Chapter 10. Registering the Hypervisor and Virtual Machine Red Hat Enterprise Linux 6 and 7 require that every guest virtual machine is mapped to a specific hypervisor in order to ensure that every guest is allocated the same level of subscription service. To do this you need to install a subscription agent that automatically detects all guest Virtual Machines (VMs) on each KVM hypervisor that is installed and registered, which in turn will create a mapping file that sits on the host. This mapping file ensures that all guest VMs receive the following benefits: Subscriptions specific to virtual systems are readily available and can be applied to all of the associated guest VMs. All subscription benefits that can be inherited from the hypervisor are readily available and can be applied to all of the associated guest VMs. Note The information provided in this chapter is specific to Red Hat Enterprise Linux subscriptions only. If you also have a Red Hat Virtualization subscription, or a Red Hat Satellite subscription, you should also consult the virt-who information provided with those subscriptions. More information on Red Hat Subscription Management can also be found in the Red Hat Subscription Management Guide found on the customer portal. 10.1. Installing virt-who on the Host Physical Machine Register the KVM hypervisor Register the KVM Hypervisor by running the subscription-manager register [options] command in a terminal as the root user on the host physical machine. More options are available using the # subscription-manager register --help menu. In cases where you are using a user name and password, use the credentials that are known to the Subscription Manager application. If this is your very first time subscribing and you do not have a user account, contact customer support. For example to register the VM as 'admin' with 'secret' as a password, you would send the following command: Install the virt-who packages Install the virt-who packages, by running the following command on the host physical machine: Create a virt-who configuration file For each hypervisor, add a configuration file in the /etc/virt-who.d/ directory. At a minimum, the file must contain the following snippet: For more detailed information on configuring virt-who , see Section 10.1.1, "Configuring virt-who " . Start the virt-who service Start the virt-who service by running the following command on the host physical machine: Confirm virt-who service is receiving guest information At this point, the virt-who service will start collecting a list of domains from the host. Check the /var/log/rhsm/rhsm.log file on the host physical machine to confirm that the file contains a list of the guest VMs. For example: Procedure 10.1. Managing the subscription on the customer portal Subscribing the hypervisor As the virtual machines will be receiving the same subscription benefits as the hypervisor, it is important that the hypervisor has a valid subscription and that the subscription is available for the VMs to use. Log in to the Customer Portal Provide your Red Hat account credentials at the Red Hat Customer Portal to log in. Click the Systems link Go to the Systems section of the My Subscriptions interface. Select the hypervisor On the Systems page, there is a table of all subscribed systems. Click the name of the hypervisor (for example localhost.localdomain ). In the details page that opens, click Attach a subscription and select all the subscriptions listed. Click Attach Selected . 
This will attach the host's physical subscription to the hypervisor so that the guests can benefit from the subscription. Subscribing the guest virtual machines - first time use This step is for those who have a new subscription and have never subscribed a guest virtual machine before. If you are adding virtual machines, skip this step. To consume the subscription assigned to the hypervisor profile on the machine running the virt-who service, auto subscribe by running the following command in a terminal on the guest virtual machine. Subscribing additional guest virtual machines If you just subscribed a virtual machine for the first time, skip this step. If you are adding additional virtual machines, note that running this command will not necessarily re-attach the same subscriptions to the guest virtual machine. This is because removing all subscriptions and then allowing auto-attach to resolve what is necessary for a given guest virtual machine may result in different subscriptions being consumed than before. This may not have any effect on your system, but it is something you should be aware of. If you used a manual attachment procedure to attach the virtual machine, which is not described below, you will need to re-attach those virtual machines manually as the auto-attach will not work. Use the following command to first remove the subscriptions for the old guests, and then use the auto-attach to attach subscriptions to all the guests. Run these commands on the guest virtual machine. Confirm subscriptions are attached Confirm that the subscription is attached to the hypervisor by running the following command on the guest virtual machine: Output similar to the following will be displayed. Pay attention to the Subscription Details. It should say 'Subscription is current'. The ID for the subscription to attach to the system is displayed here. You will need this ID if you need to attach the subscription manually. Indicates if your subscription is current. If your subscription is not current, an error message appears. One example is Guest has not been reported on any host and is using a temporary unmapped guest subscription. In this case the guest needs to be subscribed. In other cases, use the information as indicated in Section 10.5.2, "I have subscription status errors, what do I do?" . Register additional guests When you install new guest VMs on the hypervisor, you must register the new VM and use the subscription attached to the hypervisor, by running the following commands on the guest virtual machine: 10.1.1. Configuring virt-who The virt-who service is configured using the following files: /etc/virt-who.conf - Contains general configuration information including the interval for checking connected hypervisors for changes. /etc/virt-who.d/ hypervisor_name .conf - Contains configuration information for a specific hypervisor. A web-based wizard is provided to generate hypervisor configuration files and the snippets required for virt-who.conf . To run the wizard, browse to Red Hat Virtualization Agent (virt-who) Configuration Helper on the Customer Portal. On the second page of the wizard, select the following options: Where does your virt-who report to? : Subscription Asset Manager Hypervisor Type : libvirt Follow the wizard to complete the configuration. If the configuration is performed correctly, virt-who will automatically provide the selected subscriptions to existing and future guests on the specified hypervisor.
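For reference, the following sketch shows what the two files might contain for a local libvirt hypervisor. The file name hypervisor1.conf and the interval value are illustrative assumptions rather than required settings; a local libvirt hypervisor needs nothing beyond the type setting shown earlier:
# /etc/virt-who.d/hypervisor1.conf (assumed file name)
[libvirt]
type=libvirt

# /etc/virt-who.conf (optional global settings)
[global]
# how often, in seconds, virt-who checks the hypervisor for changes (assumed value)
interval=3600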
For more information on hypervisor configuration files, see the virt-who-config man page.
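A fuller configuration file is usually needed when virt-who has to reach a remote libvirt hypervisor. The following sketch is illustrative only: the section name and host are placeholders, the owner and env values must come from your own subscription management environment, and the encrypted password is assumed to have been produced with the virt-who-password utility.

[remote-kvm-host]
type=libvirt
server=kvm-host.example.com
username=virtwho
encrypted_password=<hash generated by virt-who-password>
owner=<organization ID>
env=<environment name>
hypervisor_id=hostname

After adding or editing a file in /etc/virt-who.d/ , restart the virt-who service so that the new configuration is read.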
[ "subscription-manager register --username= admin --password= secret --auto-attach", "yum install virt-who", "[libvirt] type=libvirt", "systemctl start virt-who.service systemctl enable virt-who.service", "2015-05-28 12:33:31,424 DEBUG: Libvirt domains found: [{'guestId': '58d59128-cfbb-4f2c-93de-230307db2ce0', 'attributes': {'active': 0, 'virtWhoType': 'libvirt', 'hypervisorType': 'QEMU'}, 'state': 5}]", "subscription-manager attach --auto", "subscription-manager remove --all subscription-manager attach --auto", "subscription-manager list --consumed", "subscription-manager list --consumed +-------------------------------------------+ Consumed Subscriptions +-------------------------------------------+ Subscription Name: Awesome OS with unlimited virtual guests Provides: Awesome OS Server Bits SKU: awesomeos-virt-unlimited Contract: 0 Account: ######### Your account number ##### Serial: ######### Your serial number ###### Pool ID: XYZ123 Provides Management: No Active: True Quantity Used: 1 Service Level: Service Type: Status Details: Subscription is current Subscription Type: Starts: 01/01/2015 Ends: 12/31/2015 System Type: Virtual", "subscription-manager register subscription-manager attach --auto subscription-manager list --consumed" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/reg-virt-machine
Integrating RHEL systems directly with Windows Active Directory
Integrating RHEL systems directly with Windows Active Directory Red Hat Enterprise Linux 9 Joining RHEL hosts to AD and accessing resources in AD Red Hat Customer Content Services
[ "dnf install samba-common-tools realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation", "realm discover ad.example.com ad.example.com type: kerberos realm-name: AD.EXAMPLE.COM domain-name: ad.example.com configured: no server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common", "realm join ad.example.com", "getent passwd [email protected] [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash", "dnf install realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation", "realm join --automatic-id-mapping=no ad.example.com", "rm -f /var/lib/sss/db/*", "systemctl restart sssd", "getent passwd [email protected] [email protected]:*:10000:10000:Administrator:/home/Administrator:/bin/bash", "[domain/ ad.example.com ] id_provider = ad dyndns_refresh_interval = 43200 dyndns_update_ptr = false dyndns_ttl = 3600", "systemctl restart sssd", "[domain/ ad.example.com ] id_provider = ad dyndns_update = false", "[domain/ad.example.com] id_provider = ad ad_site = ExampleSite", "systemctl restart sssd", "update-crypto-policies --set DEFAULT:AD-SUPPORT", "dnf install realmd oddjob-mkhomedir oddjob samba-winbind-clients samba-winbind samba-common-tools samba-winbind-krb5-locator krb5-workstation", "dnf install samba", "mv /etc/samba/smb.conf /etc/samba/smb.conf.bak", "realm join --membership-software=samba --client-software=winbind ad.example.com", "[plugins] localauth = { module = winbind:/usr/lib64/samba/krb5/winbind_krb5_localauth.so enable_only = winbind }", "systemctl status winbind Active: active (running) since Tue 2018-11-06 19:10:40 CET; 15s ago", "systemctl enable --now smb", "getent passwd \"AD\\administrator\" AD\\administrator:*:10000:10000::/home/administrator@AD:/bin/bash", "getent group \"AD\\Domain Users\" AD\\domain users:x:10000:user1,user2", "chown \"AD\\administrator\":\"AD\\Domain Users\" /srv/samba/example.txt", "kinit [email protected]", "klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 01.11.2018 10:00:00 01.11.2018 20:00:00 krbtgt/[email protected] renew until 08.11.2018 05:00:00", "wbinfo --all-domains BUILTIN SAMBA-SERVER AD", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "usr: administrator pwd: <password>", "--- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: \"{{ usr }}\" ad_integration_password: \"{{ pwd }}\" ad_integration_realm: \"ad.example.com\" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: \"time_server.ad.example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]' [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash", "ad_maximum_machine_account_password_age = value_in_days", "systemctl restart sssd", "realm leave ad.example.com", "realm leave [ ad.example.com ] -U [ AD.EXAMPLE.COM\\user ]'", "realm discover [ ad.example.com ] ad.example.com type: kerberos realm-name: EXAMPLE.COM domain-name: example.com configured: no server-software: active-directory client-software: sssd 
required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common-tools", "domain_resolution_order = subdomain2.ad.example.com, subdomain1.ad.example.com, ad.example.com", "systemctl restart sssd", "id <user_from_subdomain2> uid=1916901142(user_from_subdomain2) gid=1916900513(domain users) groups=1916900513(domain users)", "realm permit --all", "realm permit [email protected] realm permit 'AD.EXAMPLE.COM\\aduser01'", "ssh [email protected]@ server_name [[email protected]@ server_name ~]USD", "ssh [email protected]@ server_name Authentication failed.", "realm deny --all", "realm list example.net type: kerberos realm-name: EXAMPLE.NET domain-name: example.net configured: kerberos-member server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common-tools login-formats: %[email protected] login-policy: deny-any-login", "realm permit -x 'AD.EXAMPLE.COM\\aduser02'", "ssh [email protected]@ server_name Authentication failed.", "Oct 31 03:00:13 client1 sshd[124914]: pam_sss(sshd:account): Access denied for user aduser1: 6 (Permission denied) Oct 31 03:00:13 client1 sshd[124914]: Failed password for aduser1 from 127.0.0.1 port 60509 ssh2 Oct 31 03:00:13 client1 sshd[124914]: fatal: Access denied for user aduser1 by PAM account configuration [preauth]", "(Sat Oct 31 03:00:13 2020) [sssd[be[example.com]]] [ad_gpo_perform_hbac_processing] (0x0040): GPO access check failed: [1432158236](Host Access Denied) (Sat Oct 31 03:00:13 2020) [sssd[be[example.com]]] [ad_gpo_cse_done] (0x0040): HBAC processing failed: [1432158236](Host Access Denied} (Sat Oct 31 03:00:13 2020) [sssd[be[example.com]]] [ad_gpo_access_done] (0x0040): GPO-based access control failed.", "systemctl stop sssd", "[domain/ example.com ] ad_gpo_access_control= permissive", "systemctl restart sssd", "adcli create-msa --domain=production.example.com", "klist -k /etc/krb5.keytab.production.example.com Keytab name: FILE:/etc/krb5.keytab.production.example.com KVNO Principal ---- ------------------------------------------------------------ 2 [email protected] (aes256-cts-hmac-sha1-96) 2 [email protected] (aes128-cts-hmac-sha1-96)", "[domain/ production.example.com ] ldap_sasl_authid = [email protected] ldap_krb5_keytab = /etc/krb5.keytab.production.example.com krb5_keytab = /etc/krb5.keytab.production.example.com ad_domain = production.example.com krb5_realm = PRODUCTION.EXAMPLE.COM access_provider = ad", "[domain/ ad.example.com/production.example.com ] ldap_sasl_authid = [email protected] ldap_krb5_keytab = /etc/krb5.keytab.production.example.com krb5_keytab = /etc/krb5.keytab.production.example.com ad_domain = production.example.com krb5_realm = PRODUCTION.EXAMPLE.COM access_provider = ad", "kinit -k -t /etc/krb5.keytab.production.example.com 'CLIENT!S3AUSD' klist Ticket cache: KCM:0:54655 Default principal: [email protected] Valid starting Expires Service principal 11/22/2021 15:48:03 11/23/2021 15:48:03 krbtgt/[email protected]", "klist -k /etc/krb5.keytab.production.example.com Keytab name: FILE:/etc/krb5.keytab.production.example.com KVNO Principal ---- ------------------------------------------------------------ 2 [email protected] (aes256-cts-hmac-sha1-96) 2 [email protected] (aes128-cts-hmac-sha1-96)", "adcli update --domain=production.example.com --host-keytab=/etc/krb5.keytab.production.example.com --computer-password-lifetime=0", 
"klist -k /etc/krb5.keytab.production.example.com Keytab name: FILE:/etc/krb5.keytab.production.example.com KVNO Principal ---- ------------------------------------------------------------ 3 [email protected] (aes256-cts-hmac-sha1-96) 3 [email protected] (aes128-cts-hmac-sha1-96)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/integrating_rhel_systems_directly_with_windows_active_directory/index
Chapter 7. Operator SDK
Chapter 7. Operator SDK 7.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. See Developing Operators for full documentation on the Operator SDK. Note OpenShift Container Platform 4.10 supports Operator SDK v1.16.0. 7.1.1. Installing the Operator SDK CLI You can install the OpenShift SDK CLI tool on Linux. Prerequisites Go v1.16+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.10 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.16.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.16.0-ocp", ... 7.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Operator SDK CLI syntax USD operator-sdk <command> [<subcommand>] [<argument>] [<flags>] See Developing Operators for full documentation on the Operator SDK. 7.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 7.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 7.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. --index-builder (string) Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 7.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 7.2. cleanup flags Flag Description -h , --help Help output for the run bundle subcommand. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 7.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 7.3. completion subcommands Subcommand Description bash Generate bash completions. zsh Generate zsh completions. Table 7.4. completion flags Flag Description -h, --help Usage help output. 
For example: USD operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 7.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 7.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 7.5. create api flags Flag Description -h , --help Help output for the run bundle subcommand. 7.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 7.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 7.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResoureDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. --version (string) Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. Additional resources See Bundling an Operator and deploying with Operator Lifecycle Manager for a full procedure that includes using the make bundle command to call the generate bundle subcommand. 7.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 7.2.5.2.1. manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 7.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. 
--output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 7.2.6. init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plugin. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 7.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 7.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 7.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 7.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. -h , --help Help output for the run bundle subcommand. Additional resources See Operator group membership for details on possible install modes. 7.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 7.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. -h , --help Help output for the run bundle subcommand. 7.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely. Table 7.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. 
-w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . Additional resources See Validating Operators using the scorecard tool for details about running the scorecard tool.
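As a quick illustration of the scorecard flags listed above, the following invocation is a sketch only: it assumes a bundle directory named ./bundle in the current project and a suite=basic label that your scorecard configuration actually defines.

operator-sdk scorecard ./bundle --selector=suite=basic --output json --wait-time 60s

The command prints the results in JSON format and waits up to 60 seconds for each test to complete.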
[ "tar xvf operator-sdk-v1.16.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.16.0-ocp\",", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cli_tools/operator-sdk
11.4.2.3. Specifying a Local Lockfile
11.4.2.3. Specifying a Local Lockfile Lockfiles are very useful with Procmail because they ensure that no more than one process tries to alter a particular message at the same time. Specify a local lockfile by placing a colon ( : ) after any flags on a recipe's first line. This creates a local lockfile based on the destination file name plus whatever has been set in the LOCKEXT global environment variable. Alternatively, specify the name of the local lockfile to be used with this recipe after the colon.
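As a minimal sketch (the mailbox names are placeholders), the first recipe below lets Procmail derive the local lockfile name from the destination mailbox plus LOCKEXT , while the second names the lockfile explicitly after the colon:

:0:
backup

:0:important.lock
* ^Subject:.*important
important-mail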
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-email-procmail-recipes-lockfile
Chapter 10. Monitoring your brokers
Chapter 10. Monitoring your brokers 10.1. Viewing brokers in Fuse Console You can configure an Operator-based broker deployment to use Fuse Console for OpenShift instead of the AMQ Management Console. When you have configured your broker deployment appropriately, Fuse Console discovers the brokers and displays them on a dedicated Artemis tab. You can view the same broker runtime data that you do in the AMQ Management Console. You can also perform the same basic management operations, such as creating addresses and queues. The following procedure describes how to configure the Custom Resource (CR) instance for a broker deployment to enable Fuse Console for OpenShift to discover and display brokers in the deployment. Important Viewing brokers from Fuse Console is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites Fuse Console for OpenShift must be deployed to an OCP cluster, or to a specific namespace on that cluster. If you have deployed the console to a specific namespace, your broker deployment must be in the same namespace, to enable the console to discover the brokers. Otherwise, it is sufficient for Fuse Console and the brokers to be deployed on the same OCP cluster. For more information on installing Fuse Online on OCP, see Installing and Operating Fuse Online on OpenShift Container Platform . You must have already created a broker deployment. For example, to learn how to use a Custom Resource (CR) instance to create a basic Operator-based deployment, see Section 3.4.1, "Deploying a basic broker instance" . Procedure Open the CR instance that you used for your broker deployment. For example, the CR for a basic deployment might resemble the following: apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.8.0 deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker:7.8 ... In the deploymentPlan section, add the jolokiaAgentEnabled and managementRBACEnabled properties and specify values, as shown below. apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.8.0 deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker:7.8 ... jolokiaAgentEnabled: true managementRBACEnabled: false jolokiaAgentEnabled Specifies whether Fuse Console can discover and display runtime data for the brokers in the deployment. To use Fuse Console, set the value to true . managementRBACEnabled Specifies whether role-based access control (RBAC) is enabled for the brokers in the deployment. You must set the value to false to use Fuse Console because Fuse Console uses its own role-based access control. Important If you set the value of managementRBACEnabled to false to enable use of Fuse Console, management MBeans for the brokers no longer require authorization. You should not use the AMQ management console while managementRBACEnabled is set to false because this potentially exposes all management operations on the brokers to unauthorized use. Save the CR instance. 
Switch to the project in which you previously created your broker deployment. At the command line, apply the change. USD oc apply -f <path/to/custom-resource-instance> .yaml In Fuse Console, to view Fuse applications, click the Online tab. To view running brokers, in the left navigation menu, click Artemis . Additional resources For more information about using Fuse Console for OpenShift, see Monitoring and managing Red Hat Fuse applications on OpenShift . To learn about using AMQ Management Console to view and manage brokers in the same way that you can in Fuse Console, see Managing brokers using AMQ Management Console . 10.2. Monitoring broker runtime metrics using Prometheus The sections that follow describe how to configure the Prometheus metrics plugin for AMQ Broker on OpenShift Container Platform. You can use the plugin to monitor and store broker runtime metrics. You might also use a graphical tool such as Grafana to configure more advanced visualizations and dashboards of the data that the Prometheus plugin collects. Note The Prometheus metrics plugin enables you to collect and export broker metrics in Prometheus format . However, Red Hat does not provide support for installation or configuration of Prometheus itself, nor of visualization tools such as Grafana. If you require support with installing, configuring, or running Prometheus or Grafana, visit the product websites for resources such as community support and documentation. 10.2.1. Metrics overview To monitor the health and performance of your broker instances, you can use the Prometheus plugin for AMQ Broker to monitor and store broker runtime metrics. The AMQ Broker Prometheus plugin exports the broker runtime metrics to Prometheus format, enabling you to use Prometheus itself to visualize and run queries on the data. You can also use a graphical tool, such as Grafana, to configure more advanced visualizations and dashboards for the metrics that the Prometheus plugin collects. The metrics that the plugin exports to Prometheus format are described below. Broker metrics artemis_address_memory_usage Number of bytes used by all addresses on this broker for in-memory messages. artemis_address_memory_usage_percentage Memory used by all the addresses on this broker as a percentage of the global-max-size parameter. artemis_connection_count Number of clients connected to this broker. artemis_total_connection_count Number of clients that have connected to this broker since it was started. Address metrics artemis_routed_message_count Number of messages routed to one or more queue bindings. artemis_unrouted_message_count Number of messages not routed to any queue bindings. Queue metrics artemis_consumer_count Number of clients consuming messages from a given queue. artemis_delivering_durable_message_count Number of durable messages that a given queue is currently delivering to consumers. artemis_delivering_durable_persistent_size Persistent size of durable messages that a given queue is currently delivering to consumers. artemis_delivering_message_count Number of messages that a given queue is currently delivering to consumers. artemis_delivering_persistent_size Persistent size of messages that a given queue is currently delivering to consumers. artemis_durable_message_count Number of durable messages currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_durable_persistent_size Persistent size of durable messages currently in a given queue. 
This includes scheduled, paged, and in-delivery messages. artemis_messages_acknowledged Number of messages acknowledged from a given queue since the queue was created. artemis_messages_added Number of messages added to a given queue since the queue was created. artemis_message_count Number of messages currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_messages_killed Number of messages removed from a given queue since the queue was created. The broker kills a message when the message exceeds the configured maximum number of delivery attempts. artemis_messages_expired Number of messages expired from a given queue since the queue was created. artemis_persistent_size Persistent size of all messages (both durable and non-durable) currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_scheduled_durable_message_count Number of durable, scheduled messages in a given queue. artemis_scheduled_durable_persistent_size Persistent size of durable, scheduled messages in a given queue. artemis_scheduled_message_count Number of scheduled messages in a given queue. artemis_scheduled_persistent_size Persistent size of scheduled messages in a given queue. For higher-level broker metrics that are not listed above, you can calculate these by aggregating lower-level metrics. For example, to calculate total message count, you can aggregate the artemis_message_count metrics from all queues in your broker deployment. For an on-premise deployment of AMQ Broker, metrics for the Java Virtual Machine (JVM) hosting the broker are also exported to Prometheus format. This does not apply to a deployment of AMQ Broker on OpenShift Container Platform. 10.2.2. Enabling the Prometheus plugin for a running broker deployment This procedure shows how to enable the Prometheus plugin for a broker Pod in a given deployment. Prerequisites You can enable the Prometheus plugin for a broker Pod created with application templates or with the AMQ Broker Operator. However, your deployed broker must use the broker container image for AMQ Broker 7.5 or later. For more information about ensuring that your broker deployment uses the latest broker container image, see Chapter 9, Upgrading a template-based broker deployment . Procedure Log in to the OpenShift Container Platform web console with administrator privileges for the project that contains your broker deployment. In the web console, click Home Projects (OpenShift Container Platform 4.5 or later) or the drop-down list in the top-left corner (OpenShift Container Platform 3.11). Choose the project that contains your broker deployment. To see the StatefulSets or DeploymentConfigs in your project, click: Workloads StatefulSets or Workloads DeploymentConfigs (OpenShift Container Platform 4.5 or later). Applications StatefulSets or Applications Deployments (OpenShift Container Platform 3.11). Click the StatefulSet or DeploymentConfig that corresponds to your broker deployment. To access the environment variables for your broker deployment, click the Environment tab. Add a new environment variable, AMQ_ENABLE_METRICS_PLUGIN . Set the value of the variable to true . When you set the AMQ_ENABLE_METRICS_PLUGIN environment variable, OpenShift restarts each broker Pod in the StatefulSet or DeploymentConfig. When there are multiple Pods in the deployment, OpenShift restarts each Pod in turn. When each broker Pod restarts, the Prometheus plugin for that broker starts to gather broker runtime metrics. 
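If you prefer the command line to the web console, you can set the same environment variable with the oc client. The following command is a sketch only; the StatefulSet name ex-aao-ss is a placeholder and must match the name of your own broker deployment (use dc/ <name> instead for a DeploymentConfig):

oc set env statefulset/ex-aao-ss AMQ_ENABLE_METRICS_PLUGIN=true

As with the web console method, OpenShift restarts each broker Pod in turn after the variable is applied.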
Note The AMQ_ENABLE_METRICS_PLUGIN environment variable is included by default in the application templates for AMQ Broker 7.5 or later. To enable the plugin for each broker in a new template-based deployment, ensure that the value of AMQ_ENABLE_METRICS_PLUGIN is set to true when deploying the application template. Additional resources For information about installing the latest application templates, see Section 7.2, "Installing the image streams and application templates" 10.2.3. Accessing Prometheus metrics for a running broker Pod This procedure shows how to access Prometheus metrics for a running broker Pod. Prerequisites You must have already enabled the Prometheus plugin for your broker Pod. See Section 10.2.2, "Enabling the Prometheus plugin for a running broker deployment" . Procedure For the broker Pod whose metrics you want to access, you need to identify the Route you previously created to connect the Pod to the AMQ Broker management console. The Route name forms part of the URL needed to access the metrics. Click Networking Routes (OpenShift Container Platform 4.5 or later) or Applications Routes (OpenShift Container Platform 3.11). For your chosen broker Pod, identify the Route created to connect the Pod to the AMQ Broker management console. Under Hostname , note the complete URL that is shown. For example: To access Prometheus metrics, in a web browser, enter the previously noted Route name appended with "/metrics" . For example: Note If your console configuration does not use SSL, specify http in the URL. In this case, DNS resolution of the host name directs traffic to port 80 of the OpenShift router. If your console configuration uses SSL, specify https in the URL. In this case, your browser defaults to port 443 of the OpenShift router. This enables a successful connection to the console if the OpenShift router also uses port 443 for SSL traffic, which the router does by default. 10.3. Monitoring broker runtime data using JMX This example shows how to monitor a broker using the Jolokia REST interface to JMX. Prerequisites This example builds upon Preparing a template-based broker deployment . Completion of Deploying a basic broker is recommended. Procedure Get the list of running pods: Run the oc logs command: Run your query to monitor your broker for MaxConsumers :
[ "apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.8.0 deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker:7.8", "apiVersion: broker.amq.io/v2alpha4 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: version: 7.8.0 deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker:7.8 jolokiaAgentEnabled: true managementRBACEnabled: false", "oc project <project-name>", "oc apply -f <path/to/custom-resource-instance> .yaml", "http://rte-console-access-pod1.openshiftdomain", "http://rte-console-access-pod1.openshiftdomain/metrics", "oc get pods NAME READY STATUS RESTARTS AGE broker-amq-1-ftqmk 1/1 Running 0 14d", "oc logs -f broker-amq-1-ftqmk Running /amq-broker-71-openshift image, version 1.3-5 INFO: Loading '/opt/amq/bin/env' INFO: Using java '/usr/lib/jvm/java-1.8.0/bin/java' INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C) INFO | Listening for connections at: tcp://broker-amq-1-ftqmk:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 INFO | Connector openwire started INFO | Starting OpenShift discovery agent for service broker-amq-tcp transport type tcp INFO | Network Connector DiscoveryNetworkConnector:NC:BrokerService[broker-amq-1-ftqmk] started INFO | Apache ActiveMQ 5.11.0.redhat-621084 (broker-amq-1-ftqmk, ID:broker-amq-1-ftqmk-41433-1491445582960-0:1) started INFO | For help or more information please see: http://activemq.apache.org WARN | Store limit is 102400 mb (current store usage is 0 mb). The data directory: /opt/amq/data/kahadb only has 9684 mb of usable space - resetting to maximum available disk space: 9684 mb WARN | Temporary Store limit is 51200 mb, whilst the temporary data directory: /opt/amq/data/broker-amq-1-ftqmk/tmp_storage only has 9684 mb of usable space - resetting to maximum available 9684 mb.", "curl -k -u admin:admin http://console-broker.amq-demo.apps.example.com/console/jolokia/read/org.apache.activemq.artemis:broker=%22broker%22,component=addresses,address=%22TESTQUEUE%22,subcomponent=queues,routing-type=%22anycast%22,queue=%22TESTQUEUE%22/MaxConsumers {\"request\":{\"mbean\":\"org.apache.activemq.artemis:address=\\\"TESTQUEUE\\\",broker=\\\"broker\\\",component=addresses,queue=\\\"TESTQUEUE\\\",routing-type=\\\"anycast\\\",subcomponent=queues\",\"attribute\":\"MaxConsumers\",\"type\":\"read\"},\"value\":-1,\"timestamp\":1528297825,\"status\":200}" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_broker_on_openshift/assembly_br-broker-monitoring_broker-ocp
Validating and troubleshooting the deployed cloud
Validating and troubleshooting the deployed cloud Red Hat OpenStack Services on OpenShift 18.0 Validating and troubleshooting a deployed Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/validating_and_troubleshooting_the_deployed_cloud/index
Preface
Preface You can use the order process features of Automation Services Catalog to integrate with Information Technology Service Management (ITSM) systems such as ServiceNow. Important Support for automation services catalog is no longer available in Ansible Automation Platform 2.4 and later.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/integrating_automation_services_catalog_with_your_it_service_management_itsm_systems/pr01
Chapter 13. Networking
Chapter 13. Networking Error handling in the output of the dhcp-script has been improved Previously, any error in the output of the dhcp-script was ignored. With this update, the output of the script is logged on the add , old , del , arp-add , arp-del , tftp actions. As a result, errors are displayed while dnsmasq is running. Note that the lease-init action happens only at the start of Dnsmasq . With this update, only a summary of the output is logged and not the standard error output, which passes to the systemd service for logging. (BZ# 1188259 ) Network namespace isolation has been added to ipset Previously, ipset entries were visible and could be modified by any network namespace. This update provides ipset with isolation per network namespace. As a result, ipset configuration is separated for each namespace. (BZ#1226051) NetworkManager now supports multiple routing tables to enable source routing This update adds a new table attribute for IPv4 and IPv6 routes which can be configured manually by the user. For each manual static route, a routing table can be selected. As a result, configuring the table of a route has the effect of configuring the route in that table. Additionally, the default routing table of a connection profile can be configured via the new ipv4.route-table and ipv6.route-table settings for IPv4 and IPv6 respectively. These settings determine in which table the routes are placed, except for manual routes that explicitly override this setting. (BZ# 1436531 ) nftables rebased to version 0.8 The nftables packages have been upgraded to version 0.8, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: Support for hashing of any arbitrary key combination has been added. Support for setting non-byte bound packet header fields, including checksum adjustment, has been added. Variable reference for set element definitions and variable definitions from element commands can now be used. Support for flushing sets has been added. Support for logging flags has been added. Support for the tc classid parser has been added. Endianness problems with link layer addresses have been solved. The parser now keeps the map flag around on definition. The time datatype now uses milliseconds, as the kernel expects. (BZ# 1472261 ) Persistent DHCP client behavior added to NetworkManager With this update, the ipv4.dhcp-timeout property can be set to either the maximum for a 32-bit integer (MAXINT32) value or to the infinity value. As a result, NetworkManager never stops trying to get or renew a lease from a DHCP server until it is successful. (BZ#1350830) NetworkManager exposes new properties for team options Previously, NetworkManager applied team configuration to connections by providing a JSON string to the config property, which was the only property available in the team setting. This update adds new properties in NetworkManager that match the team configuration options one to one. As a result, the configuration may be provided either through a single JSON string in the NetworkManager config property or by assigning values to the new team properties. Any configuration change applied in config is reflected in the new team properties and vice versa. The correct configuration of team link-watchers and team.runner is now enforced in NetworkManager . Wrong or unknown link-watcher and team.runner configurations result in the full team connection being rejected.
Note that when changing the brand new runner property, all the properties related to specific runners are reset to default. (BZ# 1398925 ) Packets mark is now reflected on replies Previously, when receiving a connection request on a closed port, an error packet was sent back to the client. When the incoming connection was marked with some firewall rules, the generated error message did not have this mark because this functionality was not implemented in the kernel. With this update, the generated error message has the same marking as the incoming packet that tried to initiate the connection. (BZ#1469857) New Socket timestamping options for NTP This update adds the SOF_TIMESTAMPING_OPT_PKTINFO and SOF_TIMESTAMPING_OPT_TX_SWHW socket timestamping options for hardware timestamping with bonding and other virtual interfaces in Network Time Protocol (NTP) implementations, such as chrony. (BZ#1421164) iproute2 rebased to version 4.11.0 The iproute2 package has been upgraded to upstream version 4.11.0, which provides a number of bug fixes and enhancements. Notably, the ip tool includes: Support for JSON output to various commands has been added. Support for more interface type attributes has been added. Support for colored output has been added. Support for the label , dev options and the rule objects in ip-monitor state. Support for selectors in the ip-rule command has been added. Additionally, notable improvements for the tc utility include: Support for the bash-completion function for tc . The vlan action in tc has been introduced. The extended mode in the pedit action has been introduced. Stream Control Transmission Protocol (SCTP) support in the csum action has been added. For other tools: Support for extended statistics in the lnstat tool has been added. Support for SCTP in the nstat utility has been added. (BZ#1435647) The tc-pedit action now supports offset relative to Layer 2 and Layer 4 The tc-pedit action allows modification of packet data. This update adds support for specifying the offset options relative to the Layer 2 , 3 and 4 headers to tc-pedit . This makes pedit header handling more robust and flexible. As a result, editing Ethernet header is more convenient and accessing the Layer 4 header works independently to the Layer 3 header size. (BZ# 1468280 ) Features backported to iproute A number of enhancements have been backported to the iproute package. Notable changes include: Pipeline debug support has been added to the devlink tool via the dpipe subcommand. Hardware offload status is now available in the tc filter, indicated by the in_hw or not_in_hw flags. Support for IPv6 in the tc pedit action has been added. Setting and retrieving eswitch encapsulation support has been added to the devlink tool. Matching capabilities of the tc flower filter have been enhanced: Support for matching on TCP flags. Support for matching on the type-of-service (ToS) and the time-to-live (TTL) fields in the IP header. (BZ#1456539) The Geneve driver rebased to version 4.12 The Geneve driver has been updated to version 4.12, which provides several bug fixes and enhancements for Open vSwitch (OVS) or Open Virtual Network (OVN) deployments using Geneve tunneling. (BZ#1467288) A control switch added for VXLAN and GENEVE offloading This update adds a new control switch to the ethtool utility to enable or disable offloading of the VXLAN and GENEVE tunnels to network cards. This enhancement enables easier debugging of issues with the VXLAN or GENEVE tunnels. 
In addition, you can resolve issues caused by offloading these types of tunnels to network cards by using ethtool to disable the feature. (BZ#1308630) unbound rebased to version 1.6.6 The unbound packages have been rebased to upstream version 1.6.6, which provides a number of bug fixes and enhancements over the previous version. Notable changes are as follows: DNS Query Name (QNAME) minimisation according to RFC 7816 has been implemented. A new max-udp-size configuration option has been added; its default value is 4096 . A new DNS64 module and a new dns64-prefix option have been added. New insecure_add and insecure_remove commands have been added to the unbound-control utility for administration of negative trust anchors. The unbound-control utility is now capable of bulk addition and removal of local zones and local data. To perform these actions, use the local_zones , local_zones_remove , local_datas , and local_datas_remove commands. The libldns library is no longer a dependency of libunbound and will not be installed with it. A new so-reuseport: option is now available for distributing queries evenly over threads on Linux. New Resource Record types have been added: CDS , CDNSKEY , URI (according to RFC 7553), CSYNC , and OPENPGPKEY . New local-zone types have been added: inform to log a message with a client IP and inform_deny to log a query and drop the answer to it. Remote control over local sockets is now available; use the control-interface: /path/sock and control-use-cert: no commands. A new ip-transparent: configuration option has been added for binding to non-local IP addresses. A new ip-freebind: configuration option has been added for binding to an IP address while the interface or address is down. A new harden-algo-downgrade: configuration option has been added. The following domains are now blocked by default: onion (according to RFC 7686), test , and invalid (according to RFC 6761). A user-defined pluggable event API for the libunbound library has been added. To set the working directory for Unbound , either use the directory: dir with the include: file statement in the unbound.conf file, which ensures that the includes are relative to the directory, or use the chroot command with an absolute path. Fine-grained localzone control has been implemented with the following options: define-tag , access-control-tag , access-control-tag-action , access-control-tag-data , local-zone-tag , and local-zone-override . A new outgoing-interface: netblock/64 IPv6 option has been added to use the Linux freebind feature for every query with a random 64-bit local part. Logging of DNS replies has been added, which is similar to query logging. Trust anchor signaling has been implemented that uses key tag query and trustanchor.unbound CH TXT queries. Extension mechanisms for DNS (EDNS) Client subnet support has been implemented. ipsecmod , an opportunistic IPsec support module, has been implemented. (BZ#1251440) DHCP now supports standard dynamic DNS updates With this update, the DHCP server allows updating DNS records by using a standard protocol. As a result, DHCP supports standard dynamic DNS updates as described in RFC 2136: https://tools.ietf.org/html/rfc2136 . (BZ# 1394727 ) DDNS now supports additional algorithms Previously, the dhcpd daemon supported only the HMAC-MD5 hashing algorithm which is considered insecure for critical applications. As a consequence, the Dynamic DNS (DDNS) updates were potentially insecure.
This update adds support for additional algorithms: HMAC-SHA1 , HMAC-SHA224 , HMAC-SHA256 , HMAC-SHA384 , or HMAC-SHA512 . (BZ# 1396985 ) IPTABLES_SYSCTL_LOAD_LIST now supports the sysctl.d files The sysctl settings in IPTABLES_SYSCTL_LOAD_LIST are reloaded by the iptables init script when the iptables service is restarted. The modified settings were previously searched only in the /etc/sysctl.conf file. This update adds support for searching these modifications in the /etc/sysctl.d/ directory as well. As a result, the user-provided files in /etc/sysctl.d/ are now correctly taken into account when the iptables service is restarted. (BZ#1402021) SCTP now supports MSG_MORE The MSG_MORE flag is set to buffer small pieces of data until a full packet is ready for transmission or until a call is performed that does not specify this flag. This update adds support for MSG_MORE on the Stream Control Transmission Protocol (SCTP). As a result, small data chunks can be buffered and sent as a full packet. (BZ#1409365) MACsec rebased to version 4.13 The Media Access Control Security (MACsec) driver has been upgraded to upstream version 4.13, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: Generic Receive Offload (GRO) and Receive Packet Steering (RPS) are enabled on MACsec devices. The MODULE_ALIAS_GENL_FAMILY module has been added. This helps tools such as wpa_supplicant to start even if the module is not loaded yet. (BZ#1467335) Enhanced performance when using the mlx5 driver in Open vSwitch The Open vSwitch (OVS) application enables Virtual Machines to communicate with each other and the physical network. OVS resides in the hypervisor and switching is based on twelve tuple matching on flows. However, the OVS software-based solution is very CPU-intensive. This affects the system performance and prevents using the fully available bandwidth. With this update, the mlx5 driver for Mellanox ConnectX-4, ConnectX-4 Lx, and ConnectX-5 adapters can offload OVS. The Mellanox Accelerated Switching And Packet Processing (ASAP2) Direct technology enables offloading OVS by handling the OVS data-plane in Mellanox ConnectX-4 and later network interface cards with Mellanox Embedded Switch or eSwitch, while maintaining an unmodified OVS control-plane. As a result, the OVS performance is significantly higher and less CPU-intensive. The current actions supported by ASAP2 Direct include packet parsing and matching, forward, drop along with VLAN push/pop, or VXLAN encapsulation and decapsulation. (BZ#1456687) The Netronome NFP Ethernet driver now supports the representor netdev feature This update backports the representor netdev feature for the Netronome NFP Ethernet driver to Red Hat Enterprise Linux 7.5. This enhancement enables the driver: To receive and transmit fallback traffic To be used in Open vSwitch To support programming flows to the NFP hardware by using the TC-Flower utility (BZ#1454745) Support for offloading TC-Flower actions This update adds support for offloading the TC-Flower classifier and actions related to offloading of Open vSwitch. This allows acceleration of Open vSwitch using Netronome SmartNICs. (BZ#1468286) DNS stub resolver improvements The DNS stub resolver in the glibc package has been updated to the upstream glibc version 2.26. Notable improvements and bug fixes include: Changes to the /etc/resolv.conf file are now automatically recognized and applied to running programs. 
To restore the behavior, add the no-reload option to the options line in /etc/resolv.conf . Note that depending on system configuration, the /etc/resolv.conf file might be automatically overwritten as part of the configuration of the networking subsystem, removing the no-reload option. The limit of six search domain entries is removed. You can now specify any number of domains with the search directive in /etc/resolv.conf . Note that additional entries may add significant overhead to DNS processing; consider running a local caching resolver if the number of entries exceeds three. The handling of various boundary conditions in the getaddrinfo() function is fixed. Very long lines in the /etc/hosts file (including comments) no longer affect lookup results from other lines. Unexpected terminations related to stack exhaustion on systems with certain /etc/hosts configuration no longer occur. Previously, when the rotate option was enabled in /etc/resolv.conf , the first DNS query of a new process was always sent to the second name server configured in the name server list in /etc/resolv.conf . This behavior has been changed, and the first DNS query now randomly selects a name server from the list. Subsequent queries rotate through the available name servers, as before. (BZ# 677316 , BZ# 1432085 , BZ#1257639, BZ# 1452034 , BZ#1329674)
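To illustrate the stub resolver options described above, the following /etc/resolv.conf sketch uses placeholder domains and documentation addresses: the search directive may now list any number of domains, rotate makes the resolver pick the first name server at random, and no-reload restores the previous behavior of not picking up changes to the file at run time.

search example.com lab.example.com corp.example.com
nameserver 192.0.2.1
nameserver 192.0.2.2
options rotate no-reload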
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_networking
4.156. lldpad
4.156. lldpad 4.156.1. RHBA-2011:1604 - lldpad bug fix and enhancement update An updated lldpad package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The lldpad package provides the Linux user space daemon and configuration tool for Intel's Link Layer Discovery Protocol (LLDP) agent with Enhanced Ethernet support. The lldpad package has been upgraded to upstream version 0.9.43, which provides a number of bug fixes and enhancements over the previous version. (BZ# 731407 ) Bug Fixes BZ# 749057 The Brocade 8000 Fibre Channel Forwarder (FCF) switch with FabOs 6.4.2b failed to process the CEE TLV frame on fabric session startup (started by lldpad). As a consequence, the Brocade 8000 Fibre Channel Forwarder (FCF) switch with FabOs 6.4.2b terminated the connection and subsequent fabric logins failed when IEEE 802.1Qaz DCBX was enabled. With this update, the lldptool utility can configure lldpad not to use the CEE TLV frame for the fabric session initiation (for the eth3 device, the initiator should issue the "lldptool -T -i eth3 -V IEEE-DCBX mode=reset" command) and the problem no longer occurs. BZ# 694639 The lldpad service triggered excessive timeout events every second. This caused the service to consume excess resources. Now, the lldpad service has been switched from a polling-based to a demand-based model. This prevents excessive timeout event generation and ensures that the service consumes only the expected resources. BZ# 733123 The lldpad utility did not detect the maximum number of traffic classes supported by a device correctly. This resulted in an invalid or incorrect hardware configuration. Now, the utility detects the maximum number of traffic classes correctly. BZ# 720825 , BZ# 744133 The Edge Control Protocol (ECP) could not verify whether a port lookup was successful when running Virtual Discovery and Configuration Protocol (VDP) on bonded devices because VDP does not support bonded devices. As a consequence, the LLDP agent terminated unexpectedly with a segmentation fault. With this update, VDP is no longer initialized on bonded devices and the crash no longer occurs. BZ# 647211 The lldpad utility failed to initialize correctly on the Intel 82599ES 10 Gigabit Ethernet Controller (Niantic) with virtual functions enabled and returned a message that there were too many neighbors. With this update, lldpad initializes correctly and the problem no longer occurs. BZ# 735313 Prior to this update, a user with non-superuser permissions could start the lldpad service. With this update, the lldpad init scripts have been modified and a user with non-superuser permissions can no longer start the service. BZ# 683837 The init script did not perform a line feed when returning the output of a service command. With this update, the init script has been recoded and the output of the service command is correct. BZ# 720730 The get_bcn() function returned without freeing the nlh variable, which caused a memory leak. The function has been modified and the memory leak no longer occurs. BZ# 741359 The lldpad daemon failed to detect that a NIC (Network Interface Card) had the offloaded DCBX (Data Center Bridging eXchange) stack implemented in its firmware. As a consequence, the lldp packets were sent by both the daemon and the NIC. With this update, the lldpad daemon no longer sends the packets if a NIC driver implements the offloaded DCBX stack. BZ# 749943 The lldpad utility incorrectly accessed memory.
With this update, the utility accesses the memory correctly. Enhancement BZ# 695550 The lldpad package now supports the 802.1Qaz standard (Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes). Users are advised to upgrade to this updated lldpad package, which fixes these bugs and adds these enhancements. 4.156.2. RHBA-2012:0694 - lldpad bug fix update Updated lldpad packages that fix one bug are now available for Red Hat Enterprise Linux 6. The lldpad packages provides the Linux user space daemon and configuration tool for Intel's Link Layer Discovery Protocol (LLDP) agent with Enhanced Ethernet support. Bug Fix BZ# 822377 The lldpad tool is initially invoked by initrd during the boot process to support Fibre Channel over Ethernet (FCoE) boot from a Storage Area Network (SAN). The runtime lldpad init script did not kill lldpad before restarting it after system boot. Consequently, lldpad could not be started normally after system boot. With this update, the lldpad init script now contains the "-k" option to terminate the first instance of lldpad that was started during system boot. All users of lldpad are advised to upgrade to these updated packages, which fix this bug. 4.156.3. RHBA-2012:0728 - lldpad bug fix update Updated lldpad packages that fix one bug are now available for Red Hat Enterprise Linux 6. The lldpad packages provide the Linux user space daemon and configuration tool for Intel's Link Layer Discovery Protocol (LLDP) agent with Enhanced Ethernet support. Bug Fix BZ# 828683 Previously, dcbtool commands could, under certain circumstances, fail to enable the Fibre Channel over Ethernet (FCoE) application type-length-values (TLV) for a selected interface during the installation process. Consequently, various important features might have not been enabled (for example priority flow control, or PFC) by the Data Center Bridging eXchange (DCBX) peer. To prevent such problems, application-specific parameters (such as the FCoE application TLV) in DCBX are now enabled by default. All users of lldpad are advised to upgrade to these updated packages, which fix this bug.
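As a practical illustration of the DCBX settings discussed in these errata, the following dcbtool commands are a sketch only and eth3 is a placeholder interface name: the first command queries the current DCB state of the interface, and the second explicitly enables the FCoE application TLV that the RHBA-2012:0728 update now enables by default.

dcbtool gc eth3 dcb
dcbtool sc eth3 app:fcoe e:1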
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/lldpad
Chapter 24. Updated Drivers
Chapter 24. Updated Drivers Storage Driver Updates The Microsemi Smart Family Controller driver (smartpqi.ko.xz) has been updated to version 1.1.4-115. The HP Smart Array Controller driver (hpsa.ko.xz) has been updated to version 3.4.20-125-RH1. The Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:12.0.0.5. The Avago MegaRAID SAS driver (megaraid_sas.ko.xz) has been updated to version 07.705.02.00-rh1. The Dell PERC2, 2/Si, 3/Si, 3/Di, Adaptec Advanced Raid Products, HP NetRAID-4M, IBM ServeRAID & ICP SCSI driver (aacraid.ko.xz) has been updated to version 1.2.1[50877]-custom. The QLogic FastLinQ 4xxxx iSCSI Module driver (qedi.ko.xz) has been updated to version 8.33.0.20. The QLogic Fibre Channel HBA driver (qla2xxx.ko.xz) has been updated to version 10.00.00.06.07.6-k. The QLogic QEDF 25/40/50/100Gb FCoE driver (qedf.ko.xz) has been updated to version 8.33.0.20. The LSI MPT Fusion SAS 3.0 Device driver (mpt3sas.ko.xz) has been updated to version 16.100.01.00. The LSI MPT Fusion SAS 2.0 Device driver (mpt2sas.ko.xz) has been updated to version 20.103.01.00. Network Driver Updates The Realtek RTL8152/RTL8153 Based USB Ethernet Adapters driver (r8152.ko.xz) has been updated to version v1.09.9. The VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.4.14.0-k. The Intel(R) Ethernet Connection XL710 Network driver (i40e.ko.xz) has been updated to version 2.3.2-k. The Intel(R) 10 Gigabit Virtual Function Network driver (ixgbevf.ko.xz) has been updated to version 4.1.0-k-rh7.6. The Intel(R) 10 Gigabit PCI Express Network driver (ixgbe.ko.xz) has been updated to version 5.1.0-k-rh7.6. The Intel(R) XL710 X710 Virtual Function Network driver (i40evf.ko.xz) has been updated to version 3.2.2-k. The Intel(R) Ethernet Switch Host Interface driver (fm10k.ko.xz) has been updated to version 0.22.1-k. The Broadcom BCM573xx network driver (bnxt_en.ko.xz) has been updated to version 1.9.1. The Cavium LiquidIO Intelligent Server Adapter driver (liquidio.ko.xz) has been updated to version 1.7.2. The Cavium LiquidIO Intelligent Server Adapter Virtual Function driver (liquidio_vf.ko.xz) has been updated to version 1.7.2. The Elastic Network Adapter (ENA) driver (ena.ko.xz) has been updated to version 1.5.0K. The aQuantia Corporation Network driver (atlantic.ko.xz) has been updated to version 2.0.2.1-kern. The QLogic FastLinQ 4xxxx Ethernet driver (qede.ko.xz) has been updated to version 8.33.0.20. The QLogic FastLinQ 4xxxx Core Module driver (qed.ko.xz) has been updated to version 8.33.0.20. The Cisco VIC Ethernet NIC driver (enic.ko.xz) has been updated to version 2.3.0.53. Graphics Driver and Miscellaneous Driver Updates The VMware Memory Control (Balloon) driver (vmw_balloon.ko.xz) has been updated to version 1.4.1.0-k. The HP watchdog driver (hpwdt.ko.xz) has been updated to version 1.4.0-RH1k. The standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.14.1.0.
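A brief, hedged note that is not part of the original driver list: the version of any of these in-box drivers can be confirmed on a running system with modinfo (the smartpqi module is used here purely as an example):
# Print only the version field of an installed kernel module
modinfo -F version smartpqi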
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/updated_drivers
function::ulonglong_arg
function::ulonglong_arg Name function::ulonglong_arg - Return function argument as 64-bit value Synopsis Arguments n index of argument to return Description Return the value of argument n as a 64-bit value. (Same as longlong_arg.)
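A minimal usage sketch, not taken from the original reference page (the probed kernel function vfs_write and the argument index 3 are illustrative assumptions), shows how ulonglong_arg can read a register-passed argument in a dwarfless kprobe probe:
# Print the byte count passed to the first observed vfs_write call, then exit
stap -e 'probe kprobe.function("vfs_write") { printf("count=%d\n", ulonglong_arg(3)); exit() }'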
[ "ulonglong_arg:long(n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ulonglong-arg
Chapter 1. Ansible Automation Platform containerized installation
Chapter 1. Ansible Automation Platform containerized installation Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you to understand the installation requirements and processes behind the containerized version of Ansible Automation Platform. Note Upgrades from 2.4 Containerized Ansible Automation Platform Tech Preview to 2.5 Containerized Ansible Automation Platform are not supported at this time. 1.1. Tested deployment topologies Red Hat tests Ansible Automation Platform 2.5 with a defined set of topologies to give you opinionated deployment options. The supported topologies include infrastructure topology diagrams, tested system configurations, example inventory files, and network ports information. For containerized Ansible Automation Platform, there are two infrastructure topology shapes: Growth - (All-in-one) Intended for organizations that are getting started with Ansible Automation Platform. This topology allows for smaller footprint deployments. Enterprise - Intended for organizations that require Ansible Automation Platform deployments to have redundancy or higher compute for large volumes of automation. This is a more future-proofed scaled out architecture. For more information about the tested deployment topologies for containerized Ansible Automation Platform, see Container topologies in Tested deployment models . 1.2. System requirements Use this information when planning your installation of containerized Ansible Automation Platform. Prerequisites A non-root user for the Red Hat Enterprise Linux host, with sudo or other Ansible supported privilege escalation (sudo is recommended). This user is responsible for the installation of containerized Ansible Automation Platform. SSH public key authentication for the non-root user (if installing on remote hosts). For guidelines on setting up SSH public key authentication for the non-root user, see How to configure SSH public key authentication for passwordless login . If doing a self contained local VM based installation, you can use ansible_connection=local . Internet access from the Red Hat Enterprise Linux host if you are using the default online installation method. The appropriate network ports are open if a firewall is in place. For more information about the ports to open, see Container topologies in Tested deployment models . 1.2.1. Ansible Automation Platform system requirements Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. Table 1.1. Base system requirements Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Valid Red Hat Enterprise Linux subscription (to consume the BaseOS and AppStream repositories) Operating system Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9 CPU architecture x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome. Database PostgreSQL 15 Each virtual machine (VM) has the following system requirements: Table 1.2. Virtual machine requirements Requirement Minimum requirement RAM 16 GB CPUs 4 Local disk 60 GB Disk IOPS 3000 Note If performing a bundled installation of the growth topology with hub_seed_collections=true , then 32 GB RAM is recommended. 
Note that with this configuration the install time is going to increase and can take 45 or more minutes alone to complete seeding the collections. 1.2.2. PostgreSQL requirements Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support. 1.3. Preparing the Red Hat Enterprise Linux host for containerized installation Containerized Ansible Automation Platform runs the component services as Podman based containers on top of a Red Hat Enterprise Linux host. Prepare the Red Hat Enterprise Linux host to ensure a successful installation. Procedure Log in to the Red Hat Enterprise Linux host as your non-root user. Set a hostname that is a fully qualified domain name (FQDN): sudo hostnamectl set-hostname <your_hostname> Register your Red Hat Enterprise Linux host with subscription-manager : sudo subscription-manager register Run sudo dnf repolist to validate that only the BaseOS and AppStream repositories are set up and enabled on the host: USD sudo dnf repolist Updating Subscription Management repositories. repo id repo name rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Ensure that only these repositories are available to the Red Hat Enterprise Linux host. For more information about managing custom repositories, see Managing custom software repositories . Ensure that the host has DNS configured and can resolve host names and IP addresses by using a fully qualified domain name (FQDN). This is essential to ensure services can talk to one another. Install ansible-core : sudo dnf install -y ansible-core Optional: You can install additional utilities that can be useful for troubleshooting purposes, for example wget , git-core , rsync , and vim : sudo dnf install -y wget git-core rsync vim Optional: To have the installation program automatically pick up and apply your Ansible Automation Platform subscription manifest license, follow the steps in Obtaining a manifest file . Additional resources For more information about registering your RHEL system, see Getting Started with RHEL System Registration . For information about configuring unbound DNS, see Setting up an unbound DNS server . For information about configuring DNS using BIND, see Setting up and configuring a BIND DNS server . For more information about ansible-core , see Ansible Core Documentation . 1.4. Downloading Ansible Automation Platform Choose the installation program you need based on your Red Hat Enterprise Linux environment internet connectivity and download the installation program to your Red Hat Enterprise Linux host. Procedure Download the latest installer .tar file from the Ansible Automation Platform download page . For online installations: Ansible Automation Platform 2.5 Containerized Setup For offline or bundled installations: Ansible Automation Platform 2.5 Containerized Setup Bundle Copy the installation program .tar file and the optional manifest .zip file onto your Red Hat Enterprise Linux host. Decide where you want the installation program to reside on the file system. Installation related files are created under this location and require at least 10 GB for the initial installation. Unpack the installation program .tar file into your installation directory, and go to the unpacked directory. 
To unpack the online installer: USD tar xfvz ansible-automation-platform-containerized-setup-<version>.tar.gz To unpack the offline or bundled installer: USD tar xfvz ansible-automation-platform-containerized-setup-bundle-<version>-<arch_name>.tar.gz 1.5. Configuring the inventory file You can control the installation of Ansible Automation Platform with inventory files. Inventory files define the hosts and containers used and created, variables for components, and other information needed to customize the installation. Example inventory files are provided in this document that you can copy and change to quickly get started. Inventory files for the growth topology and enterprise topology are also found in the downloaded installer package: The default one named inventory is for the enterprise topology pattern. If you want to deploy the growth topology or all-in-one pattern you need to copy over or use the inventory-growth file instead. Additionally, you can find example inventory files in Container topologies in Tested deployment models . To use the example inventory files, replace the < > placeholders with your specific variables, and update the host names. Refer to the README.md file in the installation directory for more information about optional and required variables. 1.5.1. Inventory file for online installation for containerized growth topology (all-in-one) Use the example inventory file to perform an online installation for the containerized growth topology (all-in-one): # This is the Ansible Automation Platform installer inventory file intended for the container growth deployment topology. # This inventory file expects to be run from the host where Ansible Automation Platform will be installed. # Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration. 
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/container-topologies # # Consult the docs if you are unsure what to add # For all optional variables consult the included README.md # or the Ansible Automation Platform documentation: # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation # This section is for your platform gateway hosts # ----------------------------------------------------- [automationgateway] aap.example.org # This section is for your automation controller hosts # ------------------------------------------------- [automationcontroller] aap.example.org # This section is for your automation hub hosts # ----------------------------------------------------- [automationhub] aap.example.org # This section is for your Event-Driven Ansible controller hosts # ----------------------------------------------------- [automationeda] aap.example.org # This section is for the Ansible Automation Platform database # -------------------------------------- [database] aap.example.org [all:vars] # Ansible ansible_connection=local # Common variables # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-general-inventory-variables # ----------------------------------------------------- postgresql_admin_username=postgres postgresql_admin_password=<set your own> registry_username=<your RHN username> registry_password=<your RHN password> redis_mode=standalone # Platform gateway # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-gateway-variables # ----------------------------------------------------- gateway_admin_password=<set your own> gateway_pg_host=aap.example.org gateway_pg_password=<set your own> # Automation controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-controller-variables # ----------------------------------------------------- controller_admin_password=<set your own> controller_pg_host=aap.example.org controller_pg_password=<set your own> # Automation hub # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-hub-variables # ----------------------------------------------------- hub_admin_password=<set your own> hub_pg_host=aap.example.org hub_pg_password=<set your own> hub_seed_collections=false # Event-Driven Ansible controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-controller # ----------------------------------------------------- eda_admin_password=<set your own> eda_pg_host=aap.example.org eda_pg_password=<set your own> 1.5.2. 
Inventory file for online installation for containerized enterprise topology Use the example inventory file to perform an online installation for the containerized enterprise topology: # This is the Ansible Automation Platform enterprise installer inventory file # Consult the docs if you are unsure what to add # For all optional variables consult the included README.md # or the Red Hat documentation: # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation # This section is for your platform gateway hosts # ----------------------------------------------------- [automationgateway] gateway1.example.org gateway2.example.org # This section is for your automation controller hosts # ----------------------------------------------------- [automationcontroller] controller1.example.org controller2.example.org # This section is for your Ansible Automation Platform execution hosts # ----------------------------------------------------- [execution_nodes] hop1.example.org receptor_type='hop' exec1.example.org exec2.example.org # This section is for your automation hub hosts # ----------------------------------------------------- [automationhub] hub1.example.org hub2.example.org # This section is for your Event-Driven Ansible controller hosts # ----------------------------------------------------- [automationeda] eda1.example.org eda2.example.org [redis] gateway1.example.org gateway2.example.org hub1.example.org hub2.example.org eda1.example.org eda2.example.org [all:vars] # Common variables # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-general-inventory-variables # ----------------------------------------------------- postgresql_admin_username=<set your own> postgresql_admin_password=<set your own> registry_username=<your RHN username> registry_password=<your RHN password> # Platform gateway # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-gateway-variables # ----------------------------------------------------- gateway_admin_password=<set your own> gateway_pg_host=externaldb.example.org gateway_pg_database=<set your own> gateway_pg_username=<set your own> gateway_pg_password=<set your own> # Automation controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-controller-variables # ----------------------------------------------------- controller_admin_password=<set your own> controller_pg_host=externaldb.example.org controller_pg_database=<set your own> controller_pg_username=<set your own> controller_pg_password=<set your own> # Automation hub # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-hub-variables # ----------------------------------------------------- hub_admin_password=<set your own> hub_pg_host=externaldb.example.org hub_pg_database=<set your own> hub_pg_username=<set your own> hub_pg_password=<set your own> # Event-Driven Ansible controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-controller # ----------------------------------------------------- eda_admin_password=<set your own> 
eda_pg_host=externaldb.example.org eda_pg_database=<set your own> eda_pg_username=<set your own> eda_pg_password=<set your own> Redis configuration for an enterprise topology 6 VMs are required for a Redis high availability (HA) compatible deployment. When installing Ansible Automation Platform with the containerized installer, Redis can be colocated on any Ansible Automation Platform component VMs of your choice except for execution nodes or the PostgreSQL database. They might also be assigned VMs specifically for Redis use. By default the redis_mode is set to cluster . redis_mode=cluster For more information about Redis, see Caching and queueing system in Planning your installation . 1.5.3. Additional information for configuring your inventory file For more information about the variables you can use to configure your inventory file, see Inventory file variables Offline or bundled installation To perform an offline installation, add the following under the [all:vars] group: bundle_install=true # The bundle directory must include /bundle in the path bundle_dir=<full path to the bundle directory> Configuring a HAProxy load balancer To configure a HAProxy load balancer in front of platform gateway with a custom CA cert, set the following inventory file variables under the [all:vars] group: custom_ca_cert=<path_to_cert_crt> gateway_main_url=<https://load_balancer_url> Note HAProxy SSL passthrough mode is not supported with platform gateway. Configuring Network File System (NFS) storage for automation hub NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of automation hub with a file storage backend. When installing a single instance of the automation hub, shared storage is optional. To configure shared storage for automation hub, set the following variable in the inventory file, ensuring your NFS share has read, write, and execute permissions: hub_shared_data_path=<path_to_nfs_share> To change the mount options for your NFS share, use the hub_shared_data_mount_opts variable. This variable is optional and the default value is rw,sync,hard . Configuring Amazon S3 storage for automation hub Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set hub_storage_backend to s3 . The AWS S3 bucket needs to exist before running the installation program. The variables you can use to configure this storage backend type in your inventory file are: hub_s3_access_key hub_s3_secret_key hub_s3_bucket_name hub_s3_extra_settings Extra parameters can be passed through an Ansible hub_s3_extra_settings dictionary. For example, you can set the following parameters: hub_s3_extra_settings: AWS_S3_MAX_MEMORY_SIZE: 4096 AWS_S3_REGION_NAME: eu-central-1 AWS_S3_USE_SSL: True For more information about the list of parameters, see django-storages documentation - Amazon S3 . Configuring Azure Blob Storage for automation hub Azure Blob storage is a type of object storage that is supported in containerized installations. When using an Azure blob storage backend, set hub_storage_backend to azure . The Azure container needs to exist before running the installation program. The variables you can use to configure this storage backend type in your inventory file are: hub_azure_account_key hub_azure_account_name hub_azure_container hub_azure_extra_settings Extra parameters can be passed through an Ansible hub_azure_extra_settings dictionary. 
For example, you can set the following parameters: hub_azure_extra_settings: AZURE_LOCATION: foo AZURE_SSL: True AZURE_URL_EXPIRATION_SECS: 60 For more information about the list of parameters, see django-storages documentation - Azure Storage . Loading an automation controller license file To define the location of your automation controller license file, set the following variable in the inventory file: controller_license_file=<full_path_to_your_manifest_zip_file> 1.5.4. Setting up an external (customer supported) database Important When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform. Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support. During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage . There are two possible scenarios for setting up an external database: An external database with PostgreSQL admin credentials An external database without PostgreSQL admin credentials 1.5.4.1. Setting up an external database with PostgreSQL admin credentials If you have PostgreSQL admin credentials, you can supply them in the inventory file and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have SUPERUSER privileges. To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the [all:vars] group: postgresql_admin_username=<set your own> postgresql_admin_password=<set your own> 1.5.4.2. Setting up an external database without PostgreSQL admin credentials If you do not have PostgreSQL admin credentials, then PostgreSQL users and databases need to be created for each component (platform gateway, automation controller, automation hub, and Event-Driven Ansible) before running the installation program. Procedure Connect to a PostgreSQL compliant database server with a user that has SUPERUSER privileges. # psql -h <hostname> -U <username> -p <port_number> For example: # psql -h db.example.com -U superuser -p 5432 Create the user with a password and ensure the CREATEDB role is assigned to the user. For more information, see Database Roles . CREATE USER <username> WITH PASSWORD <password> CREATEDB; For example: CREATE USER hub_user WITH PASSWORD <password> CREATEDB; Create the database and add the user you created as the owner. CREATE DATABASE <database_name> OWNER <username>; For example: CREATE DATABASE hub_database OWNER hub_user; When you have created the PostgreSQL users and databases for each component, you can supply them in the inventory file under the [all:vars] group. # Platform gateway gateway_pg_host=aap.example.org gateway_pg_database=<set your own> gateway_pg_username=<set your own> gateway_pg_password=<set your own> # Automation controller controller_pg_host=aap.example.org controller_pg_database=<set your own> controller_pg_username=<set your own> controller_pg_password=<set your own> # Automation hub hub_pg_host=aap.example.org hub_pg_database=<set your own> hub_pg_username=<set your own> hub_pg_password=<set your own> # Event-Driven Ansible eda_pg_host=aap.example.org eda_pg_database=<set your own> eda_pg_username=<set your own> eda_pg_password=<set your own> 1.5.4.3. 
Enabling the hstore extension for the automation hub PostgreSQL database Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database. This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server. If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation. If the hstore extension is not enabled before installation, the database migration fails. Procedure Check if the extension is available on the PostgreSQL server (automation hub database). USD psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'" Where the default value for <automation hub database> is automationhub . Example output with hstore available : name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row) Example output with hstore not available : name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows) On a RHEL-based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command: dnf install postgresql-contrib Load the hstore PostgreSQL extension into the automation hub database with the following command: USD psql -d <automation hub database> -c "CREATE EXTENSION hstore;" In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled. name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row) 1.5.4.4. Optional: enabling mutual TLS (mTLS) authentication mTLS authentication is disabled by default. To configure each component's database with mTLS authentication, add the following variables to your inventory file under the [all:vars] group and ensure each component has a different TLS certificate and key: # Platform gateway gateway_pg_cert_auth=true gateway_pg_tls_cert=/path/to/gateway.cert gateway_pg_tls_key=/path/to/gateway.key gateway_pg_sslmode=verify-full # Automation controller controller_pg_cert_auth=true controller_pg_tls_cert=/path/to/awx.cert controller_pg_tls_key=/path/to/awx.key controller_pg_sslmode=verify-full # Automation hub hub_pg_cert_auth=true hub_pg_tls_cert=/path/to/pulp.cert hub_pg_tls_key=/path/to/pulp.key hub_pg_sslmode=verify-full # Event-Driven Ansible eda_pg_cert_auth=true eda_pg_tls_cert=/path/to/eda.cert eda_pg_tls_key=/path/to/eda.key eda_pg_sslmode=verify-full 1.5.5. Setting registry_username and registry_password When using the registry_username and registry_password variables for an online non-bundled installation, you need to create a new registry service account. Registry service accounts are named tokens that can be used in environments where credentials will be shared, such as deployment systems. Procedure Go to https://access.redhat.com/terms-based-registry/accounts . On the Registry Service Accounts page, click New Service Account. 
Enter a name for the account using only the allowed characters. Optionally enter a description for the account. Click Create . Find the created account in the list by searching for your name in the search field. Click the name of the account that you created. Alternatively, if you know the name of your token, you can go directly to the page by entering the URL: https://access.redhat.com/terms-based-registry/token/<name-of-your-token> A token page opens, displaying a generated username (different from the account name) and a token. If no token is displayed, click Regenerate Token . You can also click this to generate a new username and token. Copy the username (for example "1234567|testuser") and use it to set the variable registry_username . Copy the token and use it to set the variable registry_password . 1.5.6. Using custom TLS certificates By default, the installation program generates self-signed TLS certificates and keys for all Ansible Automation Platform services. If you want to replace these with your own custom certificate and key, then set the following inventory file variables: ca_tls_cert=<path_to_ca_tls_certificate> ca_tls_key=<path_to_ca_tls_key> If you want to use your own TLS certificates and keys for each service (for example automation controller, automation hub, Event-Driven Ansible), then set the following inventory file variables: # Platform gateway gateway_tls_cert=<path_to_tls_certificate> gateway_tls_key=<path_to_tls_key> gateway_pg_tls_cert=<path_to_tls_certificate> gateway_pg_tls_key=<path_to_tls_key> gateway_redis_tls_cert=<path_to_tls_certificate> gateway_redis_tls_key=<path_to_tls_key> # Automation controller controller_tls_cert=<path_to_tls_certificate> controller_tls_key=<path_to_tls_key> controller_pg_tls_cert=<path_to_tls_certificate> controller_pg_tls_key=<path_to_tls_key> # Automation hub hub_tls_cert=<path_to_tls_certificate> hub_tls_key=<path_to_tls_key> hub_pg_tls_cert=<path_to_tls_certificate> hub_pg_tls_key=<path_to_tls_key> # Event-Driven Ansible eda_tls_cert=<path_to_tls_certificate> eda_tls_key=<path_to_tls_key> eda_pg_tls_cert=<path_to_tls_certificate> eda_pg_tls_key=<path_to_tls_key> eda_redis_tls_cert=<path_to_tls_certificate> eda_redis_tls_key=<path_to_tls_key> # PostgreSQL postgresql_tls_cert=<path_to_tls_certificate> postgresql_tls_key=<path_to_tls_key> # Receptor receptor_tls_cert=<path_to_tls_certificate> receptor_tls_key=<path_to_tls_key> If any of your certificates are signed by a custom Certificate Authority (CA), then you must specify the Certificate Authority's certificate by using the custom_ca_cert inventory file variable. If you have more than one custom CA certificate, combine them into a single file, then reference the combined certificate with the custom_ca_cert inventory file variable. custom_ca_cert=<path_to_custom_ca_certificate> 1.5.7. Using custom Receptor signing keys Receptor signing is enabled by default unless receptor_disable_signing=true is set, and an RSA key pair (public and private) is generated by the installation program. However, you can set custom RSA public and private keys by using the following variables: receptor_signing_private_key=<full_path_to_private_key> receptor_signing_public_key=<full_path_to_public_key> 1.5.8. Enabling automation content collection and container signing Automation content signing is disabled by default. 
To enable it, the following installation variables are required in the inventory file: # Collection signing hub_collection_signing=true hub_collection_signing_key=<full_path_to_collection_gpg_key> # Container signing hub_container_signing=true hub_container_signing_key=<full_path_to_container_gpg_key> The following variables are required if the keys are protected by a passphrase: # Collection signing hub_collection_signing_pass=<gpg_key_passphrase> # Container signing hub_container_signing_pass=<gpg_key_passphrase> The hub_collection_signing_key and hub_container_signing_key variables require the set up of keys before running an installation. Automation content signing currently only supports GnuPG (GPG) based signature keys. For more information about GPG, see the GnuPG man page . Note The algorithm and cipher used is the responsibility of the customer. Procedure On a RHEL9 server run the following command to create a new key pair for collection signing: gpg --gen-key Enter your information for "Real name" and "Email address": Example output: gpg --gen-key gpg (GnuPG) 2.3.3;
[ "sudo hostnamectl set-hostname <your_hostname>", "sudo subscription-manager register", "sudo dnf repolist Updating Subscription Management repositories. repo id repo name rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)", "sudo dnf install -y ansible-core", "sudo dnf install -y wget git-core rsync vim", "tar xfvz ansible-automation-platform-containerized-setup-<version>.tar.gz", "tar xfvz ansible-automation-platform-containerized-setup-bundle-<version>-<arch_name>.tar.gz", "This is the Ansible Automation Platform installer inventory file intended for the container growth deployment topology. This inventory file expects to be run from the host where Ansible Automation Platform will be installed. Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration. https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/container-topologies # Consult the docs if you are unsure what to add For all optional variables consult the included README.md or the Ansible Automation Platform documentation: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation This section is for your platform gateway hosts ----------------------------------------------------- [automationgateway] aap.example.org This section is for your automation controller hosts ------------------------------------------------- [automationcontroller] aap.example.org This section is for your automation hub hosts ----------------------------------------------------- [automationhub] aap.example.org This section is for your Event-Driven Ansible controller hosts ----------------------------------------------------- [automationeda] aap.example.org This section is for the Ansible Automation Platform database -------------------------------------- [database] aap.example.org Ansible ansible_connection=local Common variables https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-general-inventory-variables ----------------------------------------------------- postgresql_admin_username=postgres postgresql_admin_password=<set your own> registry_username=<your RHN username> registry_password=<your RHN password> redis_mode=standalone Platform gateway https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-gateway-variables ----------------------------------------------------- gateway_admin_password=<set your own> gateway_pg_host=aap.example.org gateway_pg_password=<set your own> Automation controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-controller-variables ----------------------------------------------------- controller_admin_password=<set your own> controller_pg_host=aap.example.org controller_pg_password=<set your own> Automation hub https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-hub-variables ----------------------------------------------------- hub_admin_password=<set your own> hub_pg_host=aap.example.org hub_pg_password=<set your own> hub_seed_collections=false Event-Driven 
Ansible controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-controller ----------------------------------------------------- eda_admin_password=<set your own> eda_pg_host=aap.example.org eda_pg_password=<set your own>", "This is the Ansible Automation Platform enterprise installer inventory file Consult the docs if you are unsure what to add For all optional variables consult the included README.md or the Red Hat documentation: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation This section is for your platform gateway hosts ----------------------------------------------------- [automationgateway] gateway1.example.org gateway2.example.org This section is for your automation controller hosts ----------------------------------------------------- [automationcontroller] controller1.example.org controller2.example.org This section is for your Ansible Automation Platform execution hosts ----------------------------------------------------- [execution_nodes] hop1.example.org receptor_type='hop' exec1.example.org exec2.example.org This section is for your automation hub hosts ----------------------------------------------------- [automationhub] hub1.example.org hub2.example.org This section is for your Event-Driven Ansible controller hosts ----------------------------------------------------- [automationeda] eda1.example.org eda2.example.org [redis] gateway1.example.org gateway2.example.org hub1.example.org hub2.example.org eda1.example.org eda2.example.org Common variables https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-general-inventory-variables ----------------------------------------------------- postgresql_admin_username=<set your own> postgresql_admin_password=<set your own> registry_username=<your RHN username> registry_password=<your RHN password> Platform gateway https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-gateway-variables ----------------------------------------------------- gateway_admin_password=<set your own> gateway_pg_host=externaldb.example.org gateway_pg_database=<set your own> gateway_pg_username=<set your own> gateway_pg_password=<set your own> Automation controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-controller-variables ----------------------------------------------------- controller_admin_password=<set your own> controller_pg_host=externaldb.example.org controller_pg_database=<set your own> controller_pg_username=<set your own> controller_pg_password=<set your own> Automation hub https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-hub-variables ----------------------------------------------------- hub_admin_password=<set your own> hub_pg_host=externaldb.example.org hub_pg_database=<set your own> hub_pg_username=<set your own> hub_pg_password=<set your own> Event-Driven Ansible controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-controller 
----------------------------------------------------- eda_admin_password=<set your own> eda_pg_host=externaldb.example.org eda_pg_database=<set your own> eda_pg_username=<set your own> eda_pg_password=<set your own>", "bundle_install=true The bundle directory must include /bundle in the path bundle_dir=<full path to the bundle directory>", "custom_ca_cert=<path_to_cert_crt> gateway_main_url=<https://load_balancer_url>", "hub_shared_data_path=<path_to_nfs_share>", "hub_s3_extra_settings: AWS_S3_MAX_MEMORY_SIZE: 4096 AWS_S3_REGION_NAME: eu-central-1 AWS_S3_USE_SSL: True", "hub_azure_extra_settings: AZURE_LOCATION: foo AZURE_SSL: True AZURE_URL_EXPIRATION_SECS: 60", "controller_license_file=<full_path_to_your_manifest_zip_file>", "postgresql_admin_username=<set your own> postgresql_admin_password=<set your own>", "psql -h <hostname> -U <username> -p <port_number>", "psql -h db.example.com -U superuser -p 5432", "CREATE USER <username> WITH PASSWORD <password> CREATEDB;", "CREATE USER hub_user WITH PASSWORD <password> CREATEDB;", "CREATE DATABASE <database_name> OWNER <username>;", "CREATE DATABASE hub_database OWNER hub_user;", "Platform gateway gateway_pg_host=aap.example.org gateway_pg_database=<set your own> gateway_pg_username=<set your own> gateway_pg_password=<set your own> Automation controller controller_pg_host=aap.example.org controller_pg_database=<set your own> controller_pg_username=<set your own> controller_pg_password=<set your own> Automation hub hub_pg_host=aap.example.org hub_pg_database=<set your own> hub_pg_username=<set your own> hub_pg_password=<set your own> Event-Driven Ansible eda_pg_host=aap.example.org eda_pg_database=<set your own> eda_pg_username=<set your own> eda_pg_password=<set your own>", "psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"", "name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)", "name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)", "dnf install postgresql-contrib", "psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"", "name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)", "Platform gateway gateway_pg_cert_auth=true gateway_pg_tls_cert=/path/to/gateway.cert gateway_pg_tls_key=/path/to/gateway.key gateway_pg_sslmode=verify-full Automation controller controller_pg_cert_auth=true controller_pg_tls_cert=/path/to/awx.cert controller_pg_tls_key=/path/to/awx.key controller_pg_sslmode=verify-full Automation hub hub_pg_cert_auth=true hub_pg_tls_cert=/path/to/pulp.cert hub_pg_tls_key=/path/to/pulp.key hub_pg_sslmode=verify-full Event-Driven Ansible eda_pg_cert_auth=true eda_pg_tls_cert=/path/to/eda.cert eda_pg_tls_key=/path/to/eda.key eda_pg_sslmode=verify-full", "https://access.redhat.com/terms-based-registry/token/<name-of-your-token>", "ca_tls_cert=<path_to_ca_tls_certificate> ca_tls_key=<path_to_ca_tls_key>", "Platform gateway gateway_tls_cert=<path_to_tls_certificate> gateway_tls_key=<path_to_tls_key> gateway_pg_tls_cert=<path_to_tls_certificate> gateway_pg_tls_key=<path_to_tls_key> gateway_redis_tls_cert=<path_to_tls_certificate> gateway_redis_tls_key=<path_to_tls_key> 
Automation controller controller_tls_cert=<path_to_tls_certificate> controller_tls_key=<path_to_tls_key> controller_pg_tls_cert=<path_to_tls_certificate> controller_pg_tls_key=<path_to_tls_key> Automation hub hub_tls_cert=<path_to_tls_certificate> hub_tls_key=<path_to_tls_key> hub_pg_tls_cert=<path_to_tls_certificate> hub_pg_tls_key=<path_to_tls_key> Event-Driven Ansible eda_tls_cert=<path_to_tls_certificate> eda_tls_key=<path_to_tls_key> eda_pg_tls_cert=<path_to_tls_certificate> eda_pg_tls_key=<path_to_tls_key> eda_redis_tls_cert=<path_to_tls_certificate> eda_redis_tls_key=<path_to_tls_key> PostgreSQL postgresql_tls_cert=<path_to_tls_certificate> postgresql_tls_key=<path_to_tls_key> Receptor receptor_tls_cert=<path_to_tls_certificate> receptor_tls_key=<path_to_tls_key>", "custom_ca_cert=<path_to_custom_ca_certificate>", "receptor_signing_private_key=<full_path_to_private_key> receptor_signing_public_key=<full_path_to_public_key>", "Collection signing hub_collection_signing=true hub_collection_signing_key=<full_path_to_collection_gpg_key> Container signing hub_container_signing=true hub_container_signing_key=<full_path_to_container_gpg_key>", "Collection signing hub_collection_signing_pass=<gpg_key_passphrase> Container signing hub_container_signing_pass=<gpg_key_passphrase>", "gpg --gen-key", "gpg --gen-key gpg (GnuPG) 2.3.3; Copyright (C) 2021 Free Software Foundation, Inc. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Note: Use \"gpg --full-generate-key\" for a full featured key generation dialog. GnuPG needs to construct a user ID to identify your key. Real name: Joe Bloggs Email address: [email protected] You selected this USER-ID: \"Joe Bloggs <[email protected]>\" Change (N)ame, (E)mail, or (O)kay/(Q)uit? O", "We need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy. gpg: key 022E4FBFB650F1C4 marked as ultimately trusted gpg: revocation certificate stored as '/home/aapuser/.gnupg/openpgp-revocs.d/F001B037976969DD3E17A829022E4FBFB650F1C4.rev' public and secret key created and signed. 
pub rsa3072 2024-10-25 [SC] [expires: 2026-10-25] F001B037976969DD3E17A829022E4FBFB650F1C4 uid Joe Bloggs <[email protected]> sub rsa3072 2024-10-25 [E] [expires: 2026-10-25]", "gpg --list-secret-keys --keyid-format=long", "gpg --export -a --output collection-signing-key.pub <email_address_used_to_generate_key>", "gpg -a --export-secret-keys <email_address_used_to_generate_key> > collection-signing-key.priv", "cat collection-signing-key.priv", "-----BEGIN PGP PRIVATE KEY BLOCK----- lQWFBGcbN14BDADTg5BsZGbSGMHypUJMuzmIffzzz4LULrZA8L/I616lzpBHJvEs sSN6KuKY1TcIwIDCCa/U5Obm46kurpP2Y+vNA1YSEtMJoSeHeamWMDd99f49ItBp <snippet> j920hRy/3wJGRDBMFa4mlQg= =uYEF -----END PGP PRIVATE KEY BLOCK-----", "Collection signing hub_collection_signing=true hub_collection_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-2.5-2/collection-signing-key.priv This variable is required if the key is protected by a passphrase hub_collection_signing_pass=<password> Container signing hub_container_signing=true hub_container_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-2.5-2/container-signing-key.priv This variable is required if the key is protected by a passphrase hub_container_signing_pass=<password>", "[execution_nodes] <fqdn_of_your_execution_host>", "receptor_port=27199 receptor_protocol=tcp receptor_type=hop", "[execution_nodes] fqdn_of_your_execution_host fqdn_of_your_hop_host receptor_type=hop receptor_peers='[\"<fqdn_of_your_execution_host>\"]'", "mkdir -p ./group_vars/automationeda", "eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager']", "ansible-playbook -i <inventory_file_name> ansible.containerized_installer.install", "ansible-playbook -i inventory ansible.containerized_installer.install", "ansible-playbook -i <inventory_file_name> -e @<vault_file_name> --ask-vault-pass -K -v ansible.containerized_installer.install", "ansible-playbook -i inventory -e @vault.yml --ask-vault-pass -K -v ansible.containerized_installer.install", "https://<gateway_node>:443", "envoy_http_port=80 envoy_https_port=443", "envoy_disable_https: true", "tar xfvz ansible-automation-platform-containerized-setup-<version>.tar.gz", "tar xfvz ansible-automation-platform-containerized-setup-bundle-<version>-<arch name>.tar.gz", "ansible-playbook -i inventory ansible.containerized_installer.install", "ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup", "ansible-playbook -i <path_to_inventory> ansible.containerized_installer.restore", "podman secret inspect --showsecret <secret_key_variable> | jq -r .[].SecretData", "podman secret inspect --showsecret controller_secret_key | jq -r .[].SecretData", "ansible-playbook -i inventory ansible.containerized_installer.uninstall", "ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true", "ansible-playbook -i inventory ansible.containerized_installer.uninstall -e postgresql_keep_databases=true", "ansible-playbook -i inventory ansible.containerized_installer.install -e controller_secret_key=<secret_key_value>" ]
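A short, hedged follow-up to the installation flow above, not part of the original procedure (container names vary with the chosen topology): after the installer playbook finishes, you can confirm that the component containers are running on each host with Podman:
podman ps --format '{{.Names}} {{.Status}}'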
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/ansible_automation_platform_containerized_installation
12.4. Storing Certificates in NSS Databases
12.4. Storing Certificates in NSS Databases By default, certmonger uses .pem files to store the key and the certificate. To store the key and the certificate in an NSS database, specify the -d and -n with the command you use for requesting the certificate. -d sets the security database location -n gives the certificate nickname which is used for the certificate in the NSS database Note The -d and -n options are used instead of the -f and -k options that give the .pem file. For example: Requesting a certificate using ipa-getcert and local-getcert allows you to specify another two options: -F gives the file where the certificate of the CA is to be stored. -a gives the location of the NSS database where the certificate of the CA is to be stored. Note If you request a certificate using selfsign-getcert , there is no need to specify the -F and -a options because generating a self-signed certificate does not involve any CA. Supplying the -F option, the -a option, or both with local-getcert allows you to obtain a copy of the CA certificate that is required in order to verify a certificate issued by the local signer. For example:
[ "selfsign-getcert request -d /export/alias -n ServerCert", "local-getcert request -F /etc/httpd/conf/ssl.crt/ca.crt -n ServerCert -f /etc/httpd/conf/ssl.crt/server.crt -k /etc/httpd/conf/ssl.key/server.key" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/working_with_certmonger-using_certmonger_with_nss
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information The OpenShift Container Platform web console captures high-level information about the cluster. 3.1. About the OpenShift Container Platform dashboards page Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by navigating to Home Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards. The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Statuses include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment) Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about: CPU time Memory allocation Storage consumed Network resources consumed Pod count Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. 3.2. Recognizing resource and project limits and quotas You can view a graphical representation of available resources in the Topology view of the web console Developer perspective. If a resource has a message about resource limitations or quotas being reached, a yellow border appears around the resource name. Click the resource to open a side panel to see the message. If the Topology view has been zoomed out, a yellow dot indicates that a message is available. If you are using List View from the View Shortcuts menu, resources appear as a list. The Alerts column indicates if a message is available.
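As a hedged aside that is not part of the web console documentation itself, similar inventory and utilization figures can be approximated from the command line with the OpenShift CLI, for example: oc get nodes , oc get pods --all-namespaces , oc get pvc --all-namespaces , and oc adm top nodes (the last command requires cluster metrics to be available).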
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/web_console/using-dashboard-to-get-cluster-info
Chapter 5. Migrating applications secured by Red Hat Single Sign-On 7.6
Chapter 5. Migrating applications secured by Red Hat Single Sign-On 7.6 Red Hat build of Keycloak introduces key changes to how applications use some of the Red Hat Single Sign-On 7.6 Client Adapters. In addition to no longer releasing some client adapters, Red Hat build of Keycloak also introduces fixes and improvements that impact how client applications use OpenID Connect and SAML protocols. In this chapter, you will find the instructions to address these changes and migrate your application to integrate with Red Hat build of Keycloak . 5.1. Migrating OpenID Connect Clients The following Java Client OpenID Connect Adapters are no longer released starting with this release of Red Hat build of Keycloak: Red Hat JBoss Enterprise Application Platform 6.x Red Hat JBoss Enterprise Application Platform 7.x Spring Boot Red Hat Fuse Compared to when these adapters were first released, OpenID Connect is now widely available across the Java Ecosystem. Also, much better interoperability and support are achieved by using the capabilities available from the technology stack, such as your application server or framework. These adapters have reached their end of life and are only available from Red Hat Single Sign-On 7.6. It is highly recommended to look for alternatives to keep your applications up to date with the latest updates to the OAuth2 and OpenID Connect protocols. 5.1.1. Key changes in OpenID Connect protocol and client settings 5.1.1.1. Access Type client option no longer available When you create or update an OpenID Connect client, Access Type is no longer available. However, you can use other methods to achieve this capability. To achieve the Bearer Only capability, create a client with no authentication flow. In the Capability config section of the client details, make sure that no flow is selected. The client cannot obtain any tokens from Keycloak, which is equivalent to using the Bearer Only access type. To achieve the Public capability, make sure that client authentication is disabled for this client and at least one flow is enabled. To achieve Confidential capability, make sure that Client Authentication is enabled for the client and at least one flow is enabled. The boolean flags bearerOnly and publicClient still exist on the client JSON object. They can be used when creating or updating a client by the admin REST API or when importing this client by partial import or realm import. However, these options are not directly available in the Admin Console v2. 5.1.1.2. Changes in validating schemes for valid redirect URIs If an application client is using non-http(s) custom schemes, the validation now requires that a valid redirect pattern explicitly allows that scheme. Example patterns for allowing custom scheme are custom:/test, custom:/test/* or custom:. For security reasons, a general pattern such as * no longer covers them. 5.1.1.3. Support for the client_id parameter in OpenID Connect Logout Endpoint Support was added for the client_id parameter, based on the OIDC RP-Initiated Logout 1.0 specification. This capability is useful to detect what client should be used for Post Logout Redirect URI verification in case the id_token_hint parameter cannot be used. The logout confirmation screen still needs to be displayed to the user when only the client_id parameter is used without the id_token_hint parameter, so clients are encouraged to use the id_token_hint parameter if they do not want the logout confirmation screen to be displayed to the user. 5.1.2. 
Valid Post Logout Redirect URIs The Valid Post Logout Redirect URIs configuration option is added to the OIDC client and is aligned with the OIDC specification. You can use a different set of redirect URIs for redirection after login and logout. The value + used for Valid Post Logout Redirect URIs means that the logout uses the same set of redirect URIs as specified by the option of Valid Redirect URIs . This change also matches the default behavior when migrating from a version due to backwards compatibility. 5.1.2.1. UserInfo Endpoint Changes 5.1.2.1.1. Error response changes The UserInfo endpoint is now returning error responses fully compliant with RFC 6750 (The OAuth 2.0 Authorization Framework: Bearer Token Usage). Error code and description (if available) are provided as WWW-Authenticate challenge attributes rather than JSON object fields. The responses will be the following, depending on the error condition: In case no access token is provided: 401 Unauthorized WWW-Authenticate: Bearer realm="myrealm" In case several methods are used simultaneously to provide an access token (for example, Authorization header + POST access_token parameter), or POST parameters are duplicated: 400 Bad Request WWW-Authenticate: Bearer realm="myrealm", error="invalid_request", error_description="..." In case an access token is missing openid scope: 403 Forbidden WWW-Authenticate: Bearer realm="myrealm", error="insufficient_scope", error_description="Missing openid scope" In case of inability to resolve cryptographic keys for UserInfo response signing/encryption: 500 Internal Server Error In case of a token validation error, a 401 Unauthorized is returned in combination with the invalid_token error code. This error includes user and client related checks and actually captures all the remaining error cases: 401 Unauthorized WWW-Authenticate: Bearer realm="myrealm", error="invalid_token", error_description="..." 5.1.2.1.2. Other Changes to the UserInfo endpoint It is now required for access tokens to have the openid scope, which is stipulated by UserInfo being a feature specific to OpenID Connect and not OAuth 2.0. If the openid scope is missing from the token, the request will be denied as 403 Forbidden . See the preceding section. UserInfo now checks the user status, and returns the invalid_token response if the user is disabled. 5.1.2.1.3. Change of the default Client ID mapper of Service Account Client. Default Client ID mapper of Service Account Client has been changed. Token Claim Name field value has been changed from clientId to client_id . client_id claim is compliant with OAuth2 specifications: JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens OAuth 2.0 Token Introspection OAuth 2.0 Token Exchange clientId userSession note still exists. 5.1.2.1.4. Added iss parameter to OAuth 2.0/OpenID Connect Authentication Response RFC 9207 OAuth 2.0 Authorization Server Issuer Identification specification adds the parameter iss in the OAuth 2.0/OpenID Connect Authentication Response for realizing secure authorization responses. In past releases, we did not have this parameter, but now Red Hat build of Keycloak adds this parameter by default, as required by the specification. However, some OpenID Connect / OAuth2 adapters, and especially older Red Hat build of Keycloak adapters, may have issues with this new parameter. For example, the parameter will be always present in the browser URL after successful authentication to the client application. 
In these cases, it may be useful to disable adding the iss parameter to the authentication response. This can be done for the particular client in the Admin Console, in the client details, in the OpenID Connect Compatibility Modes section. You can enable Exclude Issuer From Authentication Response to prevent adding the iss parameter to the authentication response. 5.2. Migrating Red Hat JBoss Enterprise Application Platform applications 5.2.1. Red Hat JBoss Enterprise Application Platform 8.x Your applications no longer need any additional dependency to integrate with Red Hat build of Keycloak or any other OpenID Provider. Instead, you can leverage the OpenID Connect support from the JBoss EAP native OpenID Connect Client. For more information, take a look at OpenID Connect in JBoss EAP. The JBoss EAP native adapter relies on a configuration schema very similar to the Red Hat build of Keycloak Adapter JSON Configuration. For instance, a deployment using a keycloak.json configuration file can be mapped to the following configuration in JBoss EAP: { "realm": "quickstart", "auth-server-url": "http://localhost:8180", "ssl-required": "external", "resource": "jakarta-servlet-authz-client", "credentials": { "secret": "secret" } } For examples of integrating Jakarta-based applications using the JBoss EAP native adapter with Red Hat build of Keycloak, see the following examples at the Red Hat build of Keycloak Quickstart Repository: JAX-RS Resource Server, Servlet Application. It is strongly recommended to migrate to the JBoss EAP native OpenID Connect client, as it is the best candidate for Jakarta applications deployed to JBoss EAP 8 and newer. 5.2.2. Red Hat JBoss Enterprise Application Platform 7.x As Red Hat JBoss Enterprise Application Platform 7.x is close to ending full support, Red Hat build of Keycloak will not provide support for it. For existing applications deployed to Red Hat JBoss Enterprise Application Platform 7.x, adapters with maintenance support are available through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 26.0 server. 5.2.3. Red Hat JBoss Enterprise Application Platform 6.x As Red Hat JBoss Enterprise Application Platform (JBoss EAP) 6.x has reached the end of maintenance support, going forward neither Red Hat Single Sign-On 7.6 nor Red Hat build of Keycloak will provide support for it. 5.3. Migrating Spring Boot applications The Spring Framework ecosystem is evolving fast, and you should have a much better experience by leveraging the OpenID Connect support already available there. Your applications no longer need any additional dependency to integrate with Red Hat build of Keycloak or any other OpenID Provider but can instead rely on the comprehensive OAuth2/OpenID Connect support from Spring Security. For more information, see OAuth2/OpenID Connect support from Spring Security. In terms of capabilities, it provides a standards-based OpenID Connect client implementation. An example of a capability that you might want to review, if not already using the standard protocols, is Logout. Red Hat build of Keycloak provides full support for standards-based logout protocols from the OpenID Connect ecosystem. For examples of how to integrate Spring Security applications with Red Hat build of Keycloak, see the Quickstart Repository. 
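As a minimal sketch of such a setup (assuming Spring Boot 3 with the spring-boot-starter-oauth2-client dependency on the classpath; the realm name, host, client ID, and secret below are placeholders rather than values from this guide), the client registration can be expressed entirely in application.yml and Spring Security discovers the provider endpoints from the issuer:

spring:
  security:
    oauth2:
      client:
        registration:
          keycloak:                            # arbitrary registration id
            client-id: spring-app              # placeholder client ID
            client-secret: change-me           # placeholder secret
            authorization-grant-type: authorization_code
            scope: openid
        provider:
          keycloak:
            # issuer-uri lets Spring Security resolve the authorization, token,
            # and JWKS endpoints through /.well-known/openid-configuration
            issuer-uri: https://keycloak.example.com/realms/quickstart

A plain REST backend would instead act as a resource server and would typically only need the spring.security.oauth2.resourceserver.jwt.issuer-uri property together with the spring-boot-starter-oauth2-resource-server dependency.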
If migrating from the Red Hat build of Keycloak Client Adapter for Spring Boot is not an option, you still have access to the adapter from Red Hat Single Sign-On 7.6, which is now in maintenance-only support. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 26.0 server. 5.4. Migrating Red Hat Fuse applications As Red Hat Fuse has reached the end of full support, Red Hat build of Keycloak 26.0 will not provide any support for it. Red Hat Fuse adapters are still available with maintenance support through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 26.0 server. 5.5. Migrating Applications Using the Authorization Services Policy Enforcer To support integration with the Red Hat build of Keycloak Authorization Services, the policy enforcer is available separately from the Java Client Adapters. <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>${Red Hat build of Keycloak .version}</version> </dependency> By decoupling it from the Java Client Adapters, it is now possible to integrate Red Hat build of Keycloak with any Java technology that provides built-in support for OAuth2 or OpenID Connect. The Red Hat build of Keycloak Policy Enforcer provides built-in support for the following types of applications: Servlet Application Using Fine-grained Authorization, Spring Boot REST Service Protected Using Red Hat build of Keycloak Authorization Services. For integration of the Red Hat build of Keycloak Policy Enforcer with different types of applications, consider the following examples: Servlet Application Using Fine-grained Authorization, Spring Boot REST Service Protected Using Keycloak Authorization Services. If migrating from the Red Hat Single Sign-On 7.6 Java Adapter you are using is not an option, you still have access to the adapter from Red Hat Single Sign-On 7.6, which is now in maintenance support. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 26.0 server. Additional resources Policy enforcers 5.6. Migrating Single Page Applications (SPA) using the Red Hat build of Keycloak JS Adapter To migrate applications secured with the Red Hat Single Sign-On 7.6 adapter, upgrade to Red Hat build of Keycloak 26.0, which provides a more recent version of the adapter. Depending on how it is used, some minor changes are needed, which are described below. 5.6.1. Legacy Promise API removed With this release, the legacy Promise API methods from the Red Hat build of Keycloak JS adapter are removed. This means that calling .success() and .error() on promises returned from the adapter is no longer possible. 5.6.2. Required to be instantiated with the new operator In a previous release, deprecation warnings were logged when the Red Hat build of Keycloak JS adapter was constructed without the new operator. Starting with this release, doing so will throw an exception instead. This change aligns with the expected behavior of JavaScript classes, which will allow further refactoring of the adapter in the future. To migrate applications secured with the Red Hat Single Sign-On 7.6 adapter, upgrade to Red Hat build of Keycloak 26.0, which provides a more recent version of the adapter. 5.7. Migrating SAML applications 5.7.1. Migrating Red Hat JBoss Enterprise Application Platform applications 5.7.1.1. 
Red Hat JBoss Enterprise Application Platform 8.x Red Hat build of Keycloak 26.0 includes client adapters for Red Hat JBoss Enterprise Application Platform 8.x, including support for Jakarta EE. 5.7.1.2. Red Hat JBoss Enterprise Application Platform 7.x As Red Hat JBoss Enterprise Application Platform 7.x is close to ending full support, Red Hat build of Keycloak will not provide support for it. For existing applications deployed to Red Hat JBoss Enterprise Application Platform 7.x, adapters with maintenance support are available through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 26.0 server. 5.7.1.3. Red Hat JBoss Enterprise Application Platform 6.x As Red Hat JBoss Enterprise Application Platform (JBoss EAP) 6.x has reached the end of maintenance support, going forward neither Red Hat Single Sign-On 7.6 nor Red Hat build of Keycloak will provide support for it. 5.7.2. Key changes in SAML protocol and client settings 5.7.2.1. SAML SP metadata changes Prior to this release, SAML SP metadata contained the same key for both signing and encryption use. Starting with this version of Red Hat build of Keycloak, only realm keys intended for encryption are included for encryption use in SP metadata. For each encryption key descriptor, we also specify the algorithm that it is supposed to be used with. The supported XML-Enc algorithms map to Red Hat build of Keycloak realm keys as follows: rsa-oaep-mgf1p maps to the RSA-OAEP realm key algorithm, and rsa-1_5 maps to RSA1_5. Additional resources Keycloak Upgrading Guide 5.7.2.2. Deprecated RSA_SHA1 and DSA_SHA1 algorithms for SAML The algorithms RSA_SHA1 and DSA_SHA1, which can be configured as Signature algorithms on SAML adapters, clients, and identity providers, are deprecated. We recommend using safer alternatives based on SHA256 or SHA512. Also, verifying signatures on signed SAML documents or assertions with these algorithms does not work on Java 17 or higher. If you use these algorithms and the other party consuming your SAML documents is running on Java 17 or higher, verifying signatures will not work. A possible workaround is to remove algorithms such as http://www.w3.org/2000/09/xmldsig#rsa-sha1 or http://www.w3.org/2000/09/xmldsig#dsa-sha1 from the list of disallowed algorithms configured in the jdk.xml.dsig.secureValidationPolicy property in the file $JAVA_HOME/conf/security/java.security.
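As an abridged sketch of that workaround (the default value of this property is longer and differs between JDK versions, so the entries shown alongside the removed ones are illustrative only), the edited property in $JAVA_HOME/conf/security/java.security would look similar to the following, with the two SHA-1 related disallowAlg continuation lines deleted and everything else left untouched:

# Abridged example of jdk.xml.dsig.secureValidationPolicy after the edit.
# Compared to the JDK default, only these two continuation lines were removed:
#   disallowAlg http://www.w3.org/2000/09/xmldsig#dsa-sha1,\
#   disallowAlg http://www.w3.org/2000/09/xmldsig#rsa-sha1,\
jdk.xml.dsig.secureValidationPolicy=\
    disallowAlg http://www.w3.org/TR/1999/REC-xslt-19991116,\
    maxTransforms 5,\
    maxReferences 30,\
    noDuplicateIds,\
    noRetrievalMethodLoops

Keep in mind that this relaxes XML signature validation for the whole JVM, so moving to SHA256- or SHA512-based signature algorithms remains the preferred fix.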
[ "401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\"", "400 Bad Request WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_request\", error_description=\"...\"", "403 Forbidden WWW-Authenticate: Bearer realm=\"myrealm\", error=\"insufficient_scope\", error_description=\"Missing openid scope\"", "500 Internal Server Error", "401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_token\", error_description=\"...\"", "{ \"realm\": \"quickstart\", \"auth-server-url\": \"http://localhost:8180\", \"ssl-required\": \"external\", \"resource\": \"jakarta-servlet-authz-client\", \"credentials\": { \"secret\": \"secret\" } }", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{Red Hat build of Keycloak .version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/migration_guide/migrating-applications
Chapter 2. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1]
Chapter 2. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1] Description ContainerRuntimeConfig describes a customized Container Runtime configuration. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ContainerRuntimeConfigSpec defines the desired state of ContainerRuntimeConfig status object ContainerRuntimeConfigStatus defines the observed state of a ContainerRuntimeConfig 2.1.1. .spec Description ContainerRuntimeConfigSpec defines the desired state of ContainerRuntimeConfig Type object Required containerRuntimeConfig Property Type Description containerRuntimeConfig object ContainerRuntimeConfiguration defines the tuneables of the container runtime. It's important to note that, since the fields of the ContainerRuntimeConfiguration are directly read by the upstream kubernetes golang client, the validation of those values is handled directly by that golang client which is outside of the controller for ContainerRuntimeConfiguration. Please ensure the valid values are used for those fields as invalid values may render cluster nodes unusable. machineConfigPoolSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. 2.1.2. .spec.containerRuntimeConfig Description ContainerRuntimeConfiguration defines the tuneables of the container runtime. It's important to note that, since the fields of the ContainerRuntimeConfiguration are directly read by the upstream kubernetes golang client, the validation of those values is handled directly by that golang client which is outside of the controller for ContainerRuntimeConfiguration. Please ensure the valid values are used for those fields as invalid values may render cluster nodes unusable. Type object Property Type Description defaultRuntime string defaultRuntime is the name of the OCI runtime to be used as the default. logLevel string logLevel specifies the verbosity of the logs based on the level it is set to. Options are fatal, panic, error, warn, info, and debug. logSizeMax string logSizeMax specifies the Maximum size allowed for the container log file. Negative numbers indicate that no size limit is imposed. If it is positive, it must be >= 8192 to match/exceed conmon's read buffer. overlaySize string overlaySize specifies the maximum size of a container image. This flag can be used to set quota on the size of container images. pidsLimit integer pidsLimit specifies the maximum number of processes allowed in a container 2.1.3. .spec.machineConfigPoolSelector Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. 
An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.4. .spec.machineConfigPoolSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.5. .spec.machineConfigPoolSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.6. .status Description ContainerRuntimeConfigStatus defines the observed state of a ContainerRuntimeConfig Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object ContainerRuntimeConfigCondition defines the state of the ContainerRuntimeConfig observedGeneration integer observedGeneration represents the generation observed by the controller. 2.1.7. .status.conditions Description conditions represents the latest available observations of current state. Type array 2.1.8. .status.conditions[] Description ContainerRuntimeConfigCondition defines the state of the ContainerRuntimeConfig Type object Property Type Description lastTransitionTime `` lastTransitionTime is the time of the last update to the current status object. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the reason for the condition's last transition. Reasons are PascalCase status string status of the condition, one of True, False, Unknown. type string type specifies the state of the operator's reconciliation functionality. 2.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs DELETE : delete collection of ContainerRuntimeConfig GET : list objects of kind ContainerRuntimeConfig POST : create a ContainerRuntimeConfig /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name} DELETE : delete a ContainerRuntimeConfig GET : read the specified ContainerRuntimeConfig PATCH : partially update the specified ContainerRuntimeConfig PUT : replace the specified ContainerRuntimeConfig /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name}/status GET : read status of the specified ContainerRuntimeConfig PATCH : partially update status of the specified ContainerRuntimeConfig PUT : replace status of the specified ContainerRuntimeConfig 2.2.1. 
/apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ContainerRuntimeConfig Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ContainerRuntimeConfig Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a ContainerRuntimeConfig Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body ContainerRuntimeConfig schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfig schema 201 - Created ContainerRuntimeConfig schema 202 - Accepted ContainerRuntimeConfig schema 401 - Unauthorized Empty 2.2.2. /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the ContainerRuntimeConfig Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ContainerRuntimeConfig Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ContainerRuntimeConfig Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ContainerRuntimeConfig Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ContainerRuntimeConfig Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ContainerRuntimeConfig schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfig schema 201 - Created ContainerRuntimeConfig schema 401 - Unauthorized Empty 2.2.3. /apis/machineconfiguration.openshift.io/v1/containerruntimeconfigs/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the ContainerRuntimeConfig Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ContainerRuntimeConfig Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ContainerRuntimeConfig Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK ContainerRuntimeConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ContainerRuntimeConfig Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
Table 2.30. Body parameters Parameter Type Description body ContainerRuntimeConfig schema Table 2.31. HTTP responses HTTP code Response body 200 - OK ContainerRuntimeConfig schema 201 - Created ContainerRuntimeConfig schema 401 - Unauthorized Empty
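As an illustration of the schema documented above, a minimal ContainerRuntimeConfig object could look like the following sketch; the resource name, the pool selector label, and the tuning values are placeholder examples rather than recommended settings, and the machineConfigPoolSelector must match labels present on an existing MachineConfigPool in your cluster:

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: example-ctrcfg                  # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      # example label; the default worker pool typically carries this label
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    logLevel: debug                     # fatal, panic, error, warn, info, or debug
    logSizeMax: "-1"                    # negative value means no log size limit
    overlaySize: 8G                     # quota on the size of container images
    pidsLimit: 2048                     # maximum number of processes per container

Creating such an object, for example with oc apply -f, corresponds to the POST endpoint listed in section 2.2.1, and the observed state then appears under the status endpoints described in section 2.2.3.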
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/machine_apis/containerruntimeconfig-machineconfiguration-openshift-io-v1
Index
Index Symbols .fetchmailrc, Fetchmail Configuration Options global options, Global Options server options, Server Options user options, User Options .procmailrc, Procmail Configuration /boot/ directory, The /boot/ Directory /etc/named.conf (see BIND) /etc/pam.conf, PAM Configuration Files (see also PAM) /etc/pam.d, PAM Configuration Files (see also PAM) /etc/sysconfig/ directory (see sysconfig directory) /lib/security/, PAM Configuration Files (see also PAM) /lib64/security/, PAM Configuration Files (see also PAM) /proc/ directory (see proc file system) A aboot, Boot Loaders for Other Architectures access control, TCP Wrappers and xinetd AccessFileName Apache configuration directive, AccessFileName Action Apache configuration directive, Action AddDescription Apache configuration directive, AddDescription AddEncoding Apache configuration directive, AddEncoding AddHandler Apache configuration directive, AddHandler AddIcon Apache configuration directive, AddIcon AddIconByEncoding Apache configuration directive, AddIconByEncoding AddIconByType Apache configuration directive, AddIconByType AddLanguage Apache configuration directive, AddLanguage AddType Apache configuration directive, AddType Alias Apache configuration directive, Alias Allow Apache configuration directive, Allow AllowOverride Apache configuration directive, AllowOverride Apache (see Apache HTTP Server) Apache HTTP Server 1.3 migration to 2.0, Migrating Apache HTTP Server 1.3 Configuration Files 2.0 features of, Features of Apache HTTP Server 2.0 file system changes, File System Changes in Apache HTTP Server 2.0 migration from 1.3, Migrating Apache HTTP Server 1.3 Configuration Files MPM specific directives, MPM Specific Server-Pool Directives packaging changes, Packaging Changes in Apache HTTP Server 2.0 additional resources, Additional Resources related books, Related Books useful websites, Useful Websites configuration, Configuration Directives in httpd.conf introducing, Apache HTTP Server log files /var/log/httpd/error_log, Configuration Directives in httpd.conf combined log file format, LogFormat , CustomLog format of, LogFormat troubleshooting with, Configuration Directives in httpd.conf , KeepAlive using log analyzer tools with, HostnameLookups migration to 2.0, Migrating Apache HTTP Server 1.3 Configuration Files bind addresses and ports, Interface and Port Binding content negotiation, Content Negotiation directory indexing, Directory Indexing DSO Support, Dynamic Shared Object (DSO) Support error documents, Error Documents LDAP, The mod_authz_ldap Module logging, Logging module system changes, Modules and Apache HTTP Server 2.0 mod_auth_db, The mod_auth_dbm and mod_auth_db Modules mod_auth_dbm, The mod_auth_dbm and mod_auth_db Modules mod_include, The mod_include Module mod_perl, The mod_perl Module mod_proxy, The mod_proxy Module mod_ssl, The mod_ssl Module PHP, PHP removed directives, Other Global Environment Changes server-pool size, Server-Pool Size Regulation SuexecUserGroup, The suexec Module , SuexecUserGroup UserDir directive, UserDir Mapping virtual host configuration, Virtual Host Configuration Multi-Processing Modules activating worker MPM, Server-Pool Size Regulation prefork, Server-Pool Size Regulation worker, Server-Pool Size Regulation reloading, Starting and Stopping httpd restarting, Starting and Stopping httpd running without security, Virtual Hosts server status reports, Location starting, Starting and Stopping httpd stopping, Starting and Stopping httpd troubleshooting, Configuration Directives in 
httpd.conf Apache HTTP Server modules, Default Modules APXS Apache utility, Adding Modules Authentication Configuration Tool and LDAP, Configuring a System to Authenticate Using OpenLDAP , PAM and LDAP autofs, autofs (see also NFS) B Basic Input/Output System (see BIOS) Berkeley Internet Name Domain (see BIND) BIND additional resources, Additional Resources installed documentation, Installed Documentation useful websites, Useful Websites common mistakes, Common Mistakes to Avoid configuration files /etc/named.conf, BIND as a Nameserver , /etc/named.conf /var/named/ directory, BIND as a Nameserver zone files, Zone Files configuration of reverse name resolution, Reverse Name Resolution Zone Files zone file directives, Zone File Directives zone file examples, Example Zone File zone file resource records, Zone File Resource Records zone statements sample, Sample zone Statements features, Advanced Features of BIND DNS enhancements, DNS Protocol Enhancements IPv6, IP version 6 multiple views, Multiple Views security, Security introducing, Berkeley Internet Name Domain (BIND) , Introduction to DNS named daemon, BIND as a Nameserver nameserver definition of, Introduction to DNS nameserver types caching-only, Nameserver Types forwarding, Nameserver Types master, Nameserver Types slave, Nameserver Types rndc program, Using rndc /etc/rndc.conf, Configuring /etc/rndc.conf command line options, Command Line Options configuring keys, Configuring /etc/rndc.conf configuring named to use, Configuring /etc/named.conf root nameserver definition of, Introduction to DNS zones definition of, Nameserver Zones bind additional resources related books, Related Books BIOS definition of, The BIOS (see also boot process) block devices, /proc/devices (see also /proc/devices) definition of, /proc/devices boot loaders, GRUB (see also GRUB) definition of, The GRUB Boot Loader types of ELILO, Boot Loaders and System Architecture GRUB, Boot Loaders and System Architecture OS/400, Boot Loaders and System Architecture YABOOT, Boot Loaders and System Architecture z/IPL, Boot Loaders and System Architecture boot process, Boot Process, Init, and Shutdown , A Detailed Look at the Boot Process (see also boot loaders) chain loading, GRUB and the x86 Boot Process direct loading, GRUB and the x86 Boot Process for x86, A Detailed Look at the Boot Process stages of, The Boot Process , A Detailed Look at the Boot Process /sbin/init command, The /sbin/init Program BIOS, The BIOS boot loader, The Boot Loader EFI shell, The BIOS kernel, The Kernel BrowserMatch Apache configuration directive, BrowserMatch C cache directives for Apache, Cache Directives CacheNegotiatedDocs Apache configuration directive, CacheNegotiatedDocs caching-only nameserver (see BIND) CGI scripts allowing execution outside cgi-bin, Directory outside the ScriptAlias, AddHandler channel bonding interface configuration of, Channel Bonding Interfaces module configuration, The Channel Bonding Module module directives, bonding Module Directives character devices, /proc/devices (see also /proc/devices) definition of, /proc/devices chkconfig, Runlevel Utilities (see also services) configuration Apache HTTP Server, Configuration Directives in httpd.conf virtual hosts, Virtual Hosts configuration directives, Apache, General Configuration Tips AccessFileName, AccessFileName Action, Action AddDescription, AddDescription AddEncoding, AddEncoding AddHandler, AddHandler AddIcon, AddIcon AddIconByEncoding, AddIconByEncoding AddIconByType, AddIconByType AddLanguage, AddLanguage 
AddType, AddType Alias, Alias Allow, Allow AllowOverride, AllowOverride BrowserMatch, BrowserMatch CacheNegotiatedDocs, CacheNegotiatedDocs CustomLog, CustomLog DefaultIcon, DefaultIcon DefaultType, DefaultType Deny, Deny Directory, Directory DirectoryIndex, DirectoryIndex DocumentRoot, DocumentRoot ErrorDocument, ErrorDocument ErrorLog, ErrorLog ExtendedStatus, ExtendedStatus for cache functionality, Cache Directives Group, Group HeaderName, HeaderName HostnameLookups, HostnameLookups IfDefine, IfDefine IfModule, IfModule Include, Include IndexIgnore, IndexIgnore IndexOptions, IndexOptions KeepAlive, KeepAlive (see also KeepAliveTimeout) troubleshooting, KeepAlive KeepAliveTimeout, KeepAliveTimeout LanguagePriority, LanguagePriority Listen, Listen LoadModule, LoadModule Location, Location LogFormat format options, LogFormat LogLevel, LogLevel MaxClients, MaxClients MaxKeepAliveRequests, MaxKeepAliveRequests MaxRequestsPerChild, MaxRequestsPerChild MaxSpareServers, MinSpareServers and MaxSpareServers MaxSpareThreads, MinSpareThreads and MaxSpareThreads MinSpareServers, MinSpareServers and MaxSpareServers MinSpareThreads, MinSpareThreads and MaxSpareThreads NameVirtualHost, NameVirtualHost Options, Options Order, Order PidFile, PidFile Proxy, Proxy ProxyRequests, ProxyRequests ReadmeName, ReadmeName Redirect, Redirect ScriptAlias, ScriptAlias ServerAdmin, ServerAdmin ServerName, ServerName ServerRoot, ServerRoot ServerSignature, ServerSignature SetEnvIf, SetEnvIf SSL configuration, Configuration Directives for SSL StartServers, StartServers SuexecUserGroup, The suexec Module , SuexecUserGroup ThreadsPerChild, ThreadsPerChild Timeout, Timeout TypesConfig, TypesConfig UseCanonicalName, UseCanonicalName User, User UserDir, UserDir VirtualHost, VirtualHost CustomLog Apache configuration directive, CustomLog D DefaultIcon Apache configuration directive, DefaultIcon DefaultType Apache configuration directive, DefaultType Denial of Service prevention using xinetd, Resource Management Options (see also xinetd) Denial of Service attack, /proc/sys/net/ (see also /proc/sys/net/ directory) definition of, /proc/sys/net/ Deny Apache configuration directive, Deny desktop environments (see X) dev directory, The /dev/ Directory devices, local ownership of, PAM and Device Ownership (see also PAM) directories /boot/, The /boot/ Directory /dev/, The /dev/ Directory /etc/, The /etc/ Directory /lib/, The /lib/ Directory /media/, The /media/ Directory /mnt/, The /mnt/ Directory /opt/, The /opt/ Directory /proc/, The /proc/ Directory /sbin/, The /sbin/ Directory /srv/, The /srv/ Directory /sys/, The /sys/ Directory /usr/, The /usr/ Directory /usr/local/, The /usr/local/ Directory /var/, The /var/ Directory Directory Apache configuration directive, Directory DirectoryIndex Apache configuration directive, DirectoryIndex display managers (see X) DNS, Introduction to DNS (see also BIND) introducing, Introduction to DNS documentation experienced user, For the More Experienced finding appropriate, Finding Appropriate Documentation first-time users, Documentation For First-Time Linux Users newsgroups, Introduction to Linux Newsgroups websites, Introduction to Linux Websites guru, Documentation for Linux Gurus DocumentRoot Apache configuration directive, DocumentRoot changing, Virtual Hosts changing shared, The Secure Web Server Virtual Host DoS (see Denial of Service) DoS attack (see Denial of Service attack) drivers (see kernel modules) DSOs loading, Adding Modules E EFI shell definition of, The BIOS (see also boot 
process) ELILO, Boot Loaders for Other Architectures , Boot Loaders and System Architecture (see also boot loaders) email additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites Fetchmail, Fetchmail history of, Email Postfix, Postfix Procmail, Mail Delivery Agents program classifications, Email Program Classifications protocols, Email Protocols IMAP, IMAP POP, POP SMTP, SMTP security, Securing Communication clients, Secure Email Clients servers, Securing Email Client Communications Sendmail, Sendmail spam filtering out, Spam Filters types Mail Delivery Agent, Mail Delivery Agent Mail Transfer Agent, Mail Transfer Agent Mail User Agent, Mail User Agent epoch, /proc/stat (see also /proc/stat) definition of, /proc/stat ErrorDocument Apache configuration directive, ErrorDocument ErrorLog Apache configuration directive, ErrorLog etc directory, The /etc/ Directory Ethernet (see network) Ethernet modules (see kernel modules) exec-shield enabling, /proc/sys/kernel/ introducing, /proc/sys/kernel/ execution domains, /proc/execdomains (see also /proc/execdomains) definition of, /proc/execdomains ExtendedStatus Apache configuration directive, ExtendedStatus Extensible Firmware Interface shell (see EFI shell) F feedback contact information, We Need Feedback! Fetchmail, Fetchmail additional resources, Additional Resources command options, Fetchmail Command Options informational, Informational or Debugging Options special, Special Options configuration options, Fetchmail Configuration Options global options, Global Options server options, Server Options user options, User Options FHS, Overview of File System Hierarchy Standard (FHS) , FHS Organization (see also file system) file system FHS standard, FHS Organization hierarchy, Overview of File System Hierarchy Standard (FHS) organization, FHS Organization structure, File System Structure virtual (see proc file system) files, proc file system changing, Changing Virtual Files , Using the sysctl Command viewing, Viewing Virtual Files , Using the sysctl Command findsmb program, findsmb forwarding nameserver (see BIND) frame buffer device, /proc/fb (see also /proc/fb) FrontPage, After Installation fstab, /etc/fstab (see also NFS) FTP, FTP (see also vsftpd) active mode, Multiple Ports, Multiple Modes command port, Multiple Ports, Multiple Modes data port, Multiple Ports, Multiple Modes definition of, FTP introducing, The File Transport Protocol passive mode, Multiple Ports, Multiple Modes server software Red Hat Content Accelerator, FTP Servers vsftpd, FTP Servers G GNOME, Desktop Environments (see also X) Group Apache configuration directive, Group groups additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books GID, Users and Groups introducing, Users and Groups shared directories, Group Directories standard, Standard Groups tools for management of groupadd, User and Group Management Tools , User Private Groups system-config-users, User Private Groups User Manager, User and Group Management Tools user private, User Private Groups GRUB, The Boot Loader , Boot Loaders and System Architecture (see also boot loaders) additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites boot process, GRUB and the x86 Boot Process Changing Runlevels at Boot Time, Changing Runlevels at Boot Time changing runlevels with, GRUB 
Interfaces commands, GRUB Commands configuration file /boot/grub/grub.conf, Configuration File Structure structure, Configuration File Structure definition of, GRUB features, Features of GRUB installing, Installing GRUB interfaces, GRUB Interfaces command line, GRUB Interfaces menu, GRUB Interfaces menu entry editor, GRUB Interfaces order of, Interfaces Load Order menu configuration file, GRUB Menu Configuration File directives, Configuration File Directives role in boot process, The Boot Loader terminology, GRUB Terminology devices, Device Names files, File Names and Blocklists root file system, The Root File System and GRUB grub.conf, Configuration File Structure (see also GRUB) H halt, Shutting Down (see also shutdown) HeaderName Apache configuration directive, HeaderName hierarchy, file system, Overview of File System Hierarchy Standard (FHS) HostnameLookups Apache configuration directive, HostnameLookups hosts access files (see TCP wrappers) hosts.allow (see TCP wrappers) hosts.deny (see TCP wrappers) httpd.conf (see configuration directives, Apache) hugepages configuration of, /proc/sys/vm/ I IfDefine Apache configuration directive, IfDefine ifdown, Interface Control Scripts IfModule Apache configuration directive, IfModule ifup, Interface Control Scripts Include Apache configuration directive, Include IndexIgnore Apache configuration directive, IndexIgnore IndexOptions Apache configuration directive, IndexOptions init command, The /sbin/init Program (see also boot process) configuration files /etc/inittab, SysV Init Runlevels role in boot process, The /sbin/init Program (see also boot process) runlevels directories for, SysV Init Runlevels runlevels accessed by, Runlevels SysV init definition of, SysV Init Runlevels initrd directory, Special File Locations Under Red Hat Enterprise Linux insmod , Kernel Module Utilities introduction, Introduction ip6tables control scripts panic, iptables Control Scripts restart, iptables Control Scripts save, iptables Control Scripts start, iptables Control Scripts status, iptables Control Scripts stop, iptables Control Scripts introducing, ip6tables and IPv6 ipchains (see iptables) IPsec (see network) iptables /sbin/iptables-restore, Saving iptables Rules /sbin/iptables-save, Saving iptables Rules additional resources, Additional Resources installed documentation, Installed Documentation useful websites, Useful Websites chains target, Packet Filtering compared with ipchains, Differences between iptables and ipchains configuration files /etc/sysconfig/iptables, Saving iptables Rules /etc/sysconfig/iptables-config, iptables Control Scripts Configuration File /etc/sysconfig/iptables.save, Saving iptables Rules control scripts panic, iptables Control Scripts restart, iptables Control Scripts save, Saving iptables Rules , iptables Control Scripts start, iptables Control Scripts status, iptables Control Scripts stop, iptables Control Scripts match options, iptables Match Options modules, Additional Match Option Modules options, Options Used within iptables Commands commands, Command Options listing, Listing Options parameters, iptables Parameter Options structure of, Structure of iptables Options target, Target Options overview of, iptables packet filtering basics, Packet Filtering protocols ICMP, ICMP Protocol TCP, TCP Protocol UDP, UDP Protocol rules list, Packet Filtering saving rules, Saving iptables Rules tables, Packet Filtering K KDE, Desktop Environments (see also X) KeepAlive Apache configuration directive, KeepAlive KeepAliveTimeout Apache 
configuration directive, KeepAliveTimeout Kerberos additional resources, Additional Resources installed documentation, Installed Documentation useful websites, Useful Websites advantages of, Advantages of Kerberos and PAM, Kerberos and PAM Authentication Server (AS), How Kerberos Works clients set up, Configuring a Kerberos 5 Client definition of, Kerberos disadvantages of, Disadvantages of Kerberos how it works, How Kerberos Works Key Distribution Center (KDC), How Kerberos Works server set up, Configuring a Kerberos 5 Server terminology, Kerberos Terminology Ticket-granting Server (TGS), How Kerberos Works Ticket-granting Ticket (TGT), How Kerberos Works kernel role in boot process, The Kernel kernel modules /etc/rc.modules, Persistent Module Loading Ethernet modules parameters, Ethernet Parameters supporting multiple cards, Using Multiple Ethernet Cards introducing, General Parameters and Modules listing, Kernel Module Utilities loading, Kernel Module Utilities module parameters specifying, Specifying Module Parameters persistent loading, Persistent Module Loading SCSI modules parameters, Storage parameters types of, General Parameters and Modules unload, Kernel Module Utilities kwin, Window Managers (see also X) L LanguagePriority Apache configuration directive, LanguagePriority LDAP additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites advantages of, Why Use LDAP? applications ldapadd, OpenLDAP Daemons and Utilities ldapdelete, OpenLDAP Daemons and Utilities ldapmodify, OpenLDAP Daemons and Utilities ldappasswd, OpenLDAP Daemons and Utilities ldapsearch, OpenLDAP Daemons and Utilities OpenLDAP suite, OpenLDAP Daemons and Utilities slapadd, OpenLDAP Daemons and Utilities slapcat, OpenLDAP Daemons and Utilities slapd, OpenLDAP Daemons and Utilities slapindex, OpenLDAP Daemons and Utilities slappasswd, OpenLDAP Daemons and Utilities slurpd, OpenLDAP Daemons and Utilities utilities, OpenLDAP Daemons and Utilities authentication using, Configuring a System to Authenticate Using OpenLDAP Authentication Configuration Tool, Configuring a System to Authenticate Using OpenLDAP editing /etc/ldap.conf, Configuring a System to Authenticate Using OpenLDAP editing /etc/nsswitch.conf, Configuring a System to Authenticate Using OpenLDAP editing /etc/openldap/ldap.conf, Configuring a System to Authenticate Using OpenLDAP editing slapd.conf, Configuring a System to Authenticate Using OpenLDAP packages, Configuring a System to Authenticate Using OpenLDAP PAM, PAM and LDAP setting up clients, Configuring a System to Authenticate Using OpenLDAP client applications, LDAP Client Applications configuration files /etc/ldap.conf, OpenLDAP Configuration Files /etc/openldap/ldap.conf, OpenLDAP Configuration Files /etc/openldap/schema/ directory, OpenLDAP Configuration Files , The /etc/openldap/schema/ Directory /etc/openldap/slapd.conf, OpenLDAP Configuration Files , Editing /etc/openldap/slapd.conf daemons, OpenLDAP Daemons and Utilities definition of, Lightweight Directory Access Protocol (LDAP) LDAPv2, Lightweight Directory Access Protocol (LDAP) LDAPv3, Lightweight Directory Access Protocol (LDAP) LDIF format of, LDAP Terminology OpenLDAP features, OpenLDAP Features setting up, OpenLDAP Setup Overview migrating older directories, Migrating Directories from Earlier Releases terminology, LDAP Terminology upgrading directories, Migrating Directories from Earlier Releases using with Apache HTTP Server, PHP4, LDAP, 
and the Apache HTTP Server using with NSS, NSS, PAM, and LDAP using with PAM, NSS, PAM, and LDAP using with PHP4, PHP4, LDAP, and the Apache HTTP Server ldapadd command, OpenLDAP Daemons and Utilities (see also LDAP) ldapdelete command, OpenLDAP Daemons and Utilities (see also LDAP) ldapmodify command, OpenLDAP Daemons and Utilities (see also LDAP) ldappasswd command, OpenLDAP Daemons and Utilities (see also LDAP) ldapsearch command, OpenLDAP Daemons and Utilities (see also LDAP) lib directory, The /lib/ Directory Lightweight Directory Access Protocol (see LDAP) LILO, The Boot Loader (see also boot loaders) role in boot process, The Boot Loader Listen Apache configuration directive, Listen LoadModule Apache configuration directive, LoadModule Location Apache configuration directive, Location LogFormat Apache configuration directive, LogFormat LogLevel Apache configuration directive, LogLevel lsmod , Kernel Module Utilities lspci, /proc/pci M Mail Delivery Agent (see email) Mail Transfer Agent (see email) Mail User Agent (see email) make_smbcodepage program, make_smbcodepage make_unicodemap program, make_unicodemap Master Boot Record (see MBR) master nameserver (see BIND) MaxClients Apache configuration directive, MaxClients MaxKeepAliveRequests Apache configuration directive, MaxKeepAliveRequests MaxRequestsPerChild Apache configuration directive, MaxRequestsPerChild MaxSpareServers Apache configuration directive, MinSpareServers and MaxSpareServers MaxSpareThreads Apache configuration directive, MinSpareThreads and MaxSpareThreads MBR definition of, A Detailed Look at the Boot Process , The BIOS (see also boot loaders) (see also boot process) MDA (see Mail Delivery Agent) media directory, The /media/ Directory metacity, Window Managers (see also X) MinSpareServers Apache configuration directive, MinSpareServers and MaxSpareServers MinSpareThreads Apache configuration directive, MinSpareThreads and MaxSpareThreads mnt directory, The /mnt/ Directory modprobe , Kernel Module Utilities module parameters (see kernel modules) modules (see kernel modules) Apache loading, Adding Modules the own, Adding Modules default, Default Modules MTA (see Mail Transfer Agent) MUA (see Mail User Agent) mwm, Window Managers (see also X) N named daemon (see BIND) named.conf (see BIND) nameserver (see BIND) NameVirtualHost Apache configuration directive, NameVirtualHost net program, net netfilter (see iptables) network additional resources, Additional Resources commands /sbin/ifdown, Interface Control Scripts /sbin/ifup, Interface Control Scripts /sbin/service network, Interface Control Scripts configuration, Interface Configuration Files functions, Network Function Files interfaces, Interface Configuration Files alias, Alias and Clone Files channel bonding, Channel Bonding Interfaces clone, Alias and Clone Files dialup, Dialup Interfaces Ethernet, Ethernet Interfaces IPsec, IPsec Interfaces scripts, Network Interfaces Network File System (see NFS) NFS additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites client /etc/fstab, /etc/fstab autofs, autofs configuration, NFS Client Configuration Files mount options, Common NFS Mount Options condrestart, Starting and Stopping NFS how it works, How It Works introducing, Network File System (NFS) portmap, NFS and portmap reloading, Starting and Stopping NFS required services, Required Services restarting, Starting and Stopping NFS security, Securing NFS file permissions, 
File Permissions host access, Host Access NFSv2/NFSv3 host access, Using NFSv2 or NFSv3 NFSv4 host access, Using NFSv4 server configuration, NFS Server Configuration /etc/exports, The /etc/exports Configuration File exportfs command, The exportfs Command exportfs command with NFSv4, Using exportfs with NFSv4 starting, Starting and Stopping NFS status, Starting and Stopping NFS stopping, Starting and Stopping NFS TCP, How It Works UDP, How It Works NIC modules (see kernel modules) nmblookup program, nmblookup non-secure Web server disabling, The Secure Web Server Virtual Host ntsysv, Runlevel Utilities (see also services) O objects, dynamically shared (see DSOs) OpenLDAP (see LDAP) OpenSSH, Features of SSH (see also SSH) configuration files for, OpenSSH Configuration Files opt directory, The /opt/ Directory Options Apache configuration directive, Options Order Apache configuration directive, Order OS/400, Boot Loaders and System Architecture (see also boot loaders) P packet filtering (see iptables) PAM additional resources, Additional Resources installed documentation, Installed Documentation useful websites, Useful Websites advantages of, Advantages of PAM configuration files, PAM Configuration Files control flags, Control Flag definition of, Pluggable Authentication Modules (PAM) Kerberos and, Kerberos and PAM modules, Module Interface arguments, Module Arguments components, Module Interface creating, Creating PAM Modules interfaces, Module Interface location of, Module Name stacking, Stacking Module Interfaces , Sample PAM Configuration Files pam_console definition of, PAM and Device Ownership pam_timestamp authentication icon and, PAM and Administrative Credential Caching definition of, PAM and Administrative Credential Caching destroying timestamps, Removing the Timestamp File directives, Common pam_timestamp Directives pam_timestamp_check destroying timestamp using, Removing the Timestamp File sample configuration files, Sample PAM Configuration Files service files, PAM Service Files shadow passwords, Sample PAM Configuration Files pam_console (see PAM) pam_timestamp (see PAM) pam_timestamp_check (see PAM) password, Sample PAM Configuration Files (see also PAM) shadow passwords, Sample PAM Configuration Files passwords shadow, Shadow Passwords pdbedit program, pdbedit PidFile Apache configuration directive, PidFile Pluggable Authentication Modules (see PAM) portmap, NFS and portmap (see also NFS) NFS, Troubleshooting NFS and portmap rpcinfo, Troubleshooting NFS and portmap status, Starting and Stopping NFS Postfix, Postfix default installation, The Default Postfix Installation prefdm (see X) proc directory, The /proc/ Directory proc file system /proc/apm, /proc/apm /proc/buddyinfo, /proc/buddyinfo /proc/bus/ directory, /proc/bus/ /proc/cmdline, /proc/cmdline /proc/cpuinfo, /proc/cpuinfo /proc/crypto, /proc/crypto /proc/devices block devices, /proc/devices character devices, /proc/devices /proc/dma, /proc/dma /proc/driver/ directory, /proc/driver/ /proc/execdomains, /proc/execdomains /proc/fb, /proc/fb /proc/filesystems, /proc/filesystems /proc/fs/ directory, /proc/fs /proc/ide/ directory, /proc/ide/ device directories, Device Directories /proc/interrupts, /proc/interrupts /proc/iomem, /proc/iomem /proc/ioports, /proc/ioports /proc/irq/ directory, /proc/irq/ /proc/kcore, /proc/kcore /proc/kmsg, /proc/kmsg /proc/loadavg, /proc/loadavg /proc/locks, /proc/locks /proc/mdstat, /proc/mdstat /proc/meminfo, /proc/meminfo /proc/misc, /proc/misc /proc/modules, /proc/modules /proc/mounts, 
/proc/mounts /proc/mtrr, /proc/mtrr /proc/net/ directory, /proc/net/ /proc/partitions, /proc/partitions /proc/pci viewing using lspci, /proc/pci /proc/scsi/ directory, /proc/scsi/ /proc/self/ directory, /proc/self/ /proc/slabinfo, /proc/slabinfo /proc/stat, /proc/stat /proc/swaps, /proc/swaps /proc/sys/ directory, /proc/sys/ , Using the sysctl Command (see also sysctl) /proc/sys/dev/ directory, /proc/sys/dev/ /proc/sys/fs/ directory, /proc/sys/fs/ /proc/sys/kernel/ directory, /proc/sys/kernel/ /proc/sys/kernel/exec-shield, /proc/sys/kernel/ /proc/sys/kernel/sysrq (see system request key) /proc/sys/net/ directory, /proc/sys/net/ /proc/sys/vm/ directory, /proc/sys/vm/ /proc/sysrq-trigger, /proc/sysrq-trigger /proc/sysvipc/ directory, /proc/sysvipc/ /proc/tty/ directory, /proc/tty/ /proc/uptime, /proc/uptime /proc/version, /proc/version additional resources, Additional Resources installed documentation, Installed Documentation useful websites, Useful Websites changing files within, Changing Virtual Files , /proc/sys/ , Using the sysctl Command files within, top-level, Top-level Files within the proc File System introduced, The proc File System process directories, Process Directories subdirectories within, Directories within /proc/ viewing files within, Viewing Virtual Files Procmail, Mail Delivery Agents additional resources, Additional Resources configuration, Procmail Configuration recipes, Procmail Recipes delivering, Delivering vs. Non-Delivering Recipes examples, Recipe Examples flags, Flags local lockfiles, Specifying a Local Lockfile non-delivering, Delivering vs. Non-Delivering Recipes SpamAssassin, Spam Filters special actions, Special Conditions and Actions special conditions, Special Conditions and Actions programs running at boot time, Running Additional Programs at Boot Time Proxy Apache configuration directive, Proxy proxy server, ProxyRequests , Cache Directives ProxyRequests Apache configuration directive, ProxyRequests public_html directories, UserDir R rc.local modifying, Running Additional Programs at Boot Time rc.serial, Running Additional Programs at Boot Time (see also setserial command) ReadmeName Apache configuration directive, ReadmeName Red Hat Enterprise Linux-specific file locations /etc/sysconfig/, Special File Locations Under Red Hat Enterprise Linux (see also sysconfig directory) /var/lib/rpm/, Special File Locations Under Red Hat Enterprise Linux /var/spool/up2date, Special File Locations Under Red Hat Enterprise Linux Redirect Apache configuration directive, Redirect rmmod , Kernel Module Utilities root nameserver (see BIND) rpcclient program, rpcclient rpcinfo, Troubleshooting NFS and portmap runlevels (see init command) changing with GRUB, GRUB Interfaces configuration of, Runlevel Utilities (see also services) S Samba (see Samba) Abilities, Samba Features Account Information Databases, Samba Account Information Databases ldapsam, New Backends ldapsam_compat, Backward Compatible Backends mysqlsam, New Backends Plain Text, Backward Compatible Backends smbpasswd, Backward Compatible Backends tdbsam, New Backends xmlsam, New Backends Additional Resources, Additional Resources installed documentation, Installed Documentation Red Hat resources, Red Hat Documentation related books, Related Books useful websites, Useful Websites Backward Compatible Database Backends, Backward Compatible Backends Browsing, Samba Network Browsing CUPS Printing Support, Samba with CUPS Printing Support CUPS smb.conf, Simple smb.conf Settings daemon, Samba Daemons and Related Services 
nmbd, The nmbd daemon overview, Daemon Overview smbd, The smbd daemon winbindd, The winbindd daemon Introduction, Introduction to Samba Network Browsing, Samba Network Browsing Domain Browsing, Domain Browsing WINS, WINS (Windows Internetworking Name Server) Workgroup Browsing, Workgroup Browsing New Database Backends, New Backends Programs, Samba Distribution Programs findsmb, findsmb make_smbcodepage, make_smbcodepage make_unicodemap, make_unicodemap net, net nmblookup, nmblookup pdbedit, pdbedit rpcclient, rpcclient smbcacls, smbcacls smbclient, smbclient smbcontrol, smbcontrol smbgroupedit, smbgroupedit smbmount, smbmount smbpasswd, smbpasswd smbspool, smbspool smbstatus, smbstatus smbtar, smbtar testparm, testparm testprns, testprns wbinfo, wbinfo Reference, Samba Security Modes, Samba Security Modes Active Directory Security Mode, Active Directory Security Mode (User-Level Security) Domain Security Mode, Domain Security Mode (User-Level Security) Server Security Mode, Server Security Mode (User-Level Security) Share-Level Security, Share-Level Security User Level Security, User-Level Security Server Types, Samba Server Types and the smb.conf File server types Domain Controller, Domain Controller Domain Member, Domain Member Server Stand Alone, Stand-alone Server service conditional restarting, Starting and Stopping Samba reloading, Starting and Stopping Samba restarting, Starting and Stopping Samba starting, Starting and Stopping Samba stopping, Starting and Stopping Samba smb.conf, Samba Server Types and the smb.conf File Active Directory Member Server example, Active Directory Domain Member Server Anonymous Print Server example, Anonymous Print Server Anonymous Read Only example, Anonymous Read-Only Anonymous Read/Write example, Anonymous Read/Write BDC using LDAP, Backup Domain Controller (BDC) using LDAP NT4-style Domain Member example, Windows NT4-based Domain Member Server PDC using Active Directory, Primary Domain Controller (PDC) with Active Directory PDC using LDAP, Primary Domain Controller (PDC) using LDAP PDC using tdbsam, Primary Domain Controller (PDC) using tdbsam Secure File and Print Server example, Secure Read/Write File and Print Server WINS, WINS (Windows Internetworking Name Server) sbin directory, The /sbin/ Directory ScriptAlias Apache configuration directive, ScriptAlias SCSI modules (see kernel modules) security running Apache without, Virtual Hosts SELinux, SELinux additional resources, Additional Resources documentation, Red Hat Documentation installed documentation, Installed Documentation websites, Useful Websites introduction, Introduction to SELinux related files, Files Related to SELinux /etc/selinux/ Directory, The /etc/selinux/ Directory /etc/sysconfig/selinux, The /etc/sysconfig/selinux Configuration File /selinux/ pseudo-file system, The /selinux/ Pseudo-File System configuration, SELinux Configuration Files utilities, SELinux Utilities Sendmail, Sendmail additional resources, Additional Resources aliases, Masquerading common configuration changes, Common Sendmail Configuration Changes default installation, The Default Sendmail Installation LDAP and, Using Sendmail with LDAP limitations, Purpose and Limitations masquerading, Masquerading purpose, Purpose and Limitations spam, Stopping Spam with UUCP, Common Sendmail Configuration Changes serial ports (see setserial command) server side includes, Options , AddType ServerAdmin Apache configuration directive, ServerAdmin ServerName Apache configuration directive, ServerName ServerRoot Apache 
configuration directive, ServerRoot ServerSignature Apache configuration directive, ServerSignature services configuring with chkconfig, Runlevel Utilities configuring with ntsysv, Runlevel Utilities configuring with Services Configuration Tool, Runlevel Utilities Services Configuration Tool, Runlevel Utilities (see also services) SetEnvIf Apache configuration directive, SetEnvIf setserial command configuring, Running Additional Programs at Boot Time shadow (see password) shadow passwords overview of, Shadow Passwords shutdown, Shutting Down (see also halt) slab pools (see /proc/slabinfo) slapadd command, OpenLDAP Daemons and Utilities (see also LDAP) slapcat command, OpenLDAP Daemons and Utilities (see also LDAP) slapd command, OpenLDAP Daemons and Utilities (see also LDAP) slapindex command, OpenLDAP Daemons and Utilities (see also LDAP) slappasswd command, OpenLDAP Daemons and Utilities (see also LDAP) slave nameserver (see BIND) slurpd command, OpenLDAP Daemons and Utilities (see also LDAP) smbcacls program, smbcacls smbclient program, smbclient smbcontrol program, smbcontrol smbgroupedit program, smbgroupedit smbmount program, smbmount smbpasswd program, smbpasswd smbspool program, smbspool smbstatus program, smbstatus smbtar program, smbtar SpamAssassin using with Procmail, Spam Filters srv directory, The /srv/ Directory SSH protocol, SSH Protocol additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites authentication, Authentication configuration files, OpenSSH Configuration Files connection sequence, Event Sequence of an SSH Connection features of, Features of SSH insecure protocols and, Requiring SSH for Remote Connections layers of channels, Channels transport layer, Transport Layer port forwarding, Port Forwarding requiring for remote login, Requiring SSH for Remote Connections security risks, Why Use SSH? 
version 1, SSH Protocol Versions version 2, SSH Protocol Versions X11 forwarding, X11 Forwarding SSL configuration, Configuration Directives for SSL StartServers Apache configuration directive, StartServers startx, Runlevel 3 (see X) (see also X) stunnel, Securing Email Client Communications SuexecUserGroup Apache configuration directive, The suexec Module , SuexecUserGroup sys directory, The /sys/ Directory sysconfig directory, Special File Locations Under Red Hat Enterprise Linux /etc/sysconfig/amd, /etc/sysconfig/amd /etc/sysconfig/apm-scripts/ directory, Directories in the /etc/sysconfig/ Directory /etc/sysconfig/apmd, /etc/sysconfig/apmd /etc/sysconfig/arpwatch, /etc/sysconfig/arpwatch /etc/sysconfig/authconfig, /etc/sysconfig/authconfig /etc/sysconfig/autofs, /etc/sysconfig/autofs /etc/sysconfig/clock, /etc/sysconfig/clock /etc/sysconfig/desktop, /etc/sysconfig/desktop /etc/sysconfig/devlabel, /etc/sysconfig/devlabel /etc/sysconfig/dhcpd, /etc/sysconfig/dhcpd /etc/sysconfig/exim, /etc/sysconfig/exim /etc/sysconfig/firstboot, /etc/sysconfig/firstboot /etc/sysconfig/gpm, /etc/sysconfig/gpm /etc/sysconfig/harddisks, /etc/sysconfig/harddisks /etc/sysconfig/hwconf, /etc/sysconfig/hwconf /etc/sysconfig/init, /etc/sysconfig/init /etc/sysconfig/ip6tables-config, /etc/sysconfig/ip6tables-config /etc/sysconfig/iptables, Saving iptables Rules /etc/sysconfig/iptables-config, /etc/sysconfig/iptables-config /etc/sysconfig/irda, /etc/sysconfig/irda /etc/sysconfig/keyboard, /etc/sysconfig/keyboard /etc/sysconfig/kudzu, /etc/sysconfig/kudzu /etc/sysconfig/mouse, /etc/sysconfig/mouse /etc/sysconfig/named, /etc/sysconfig/named /etc/sysconfig/netdump, /etc/sysconfig/netdump /etc/sysconfig/network, /etc/sysconfig/network /etc/sysconfig/network-scripts/ directory, Network Interfaces /etc/sysconfig/ntpd, /etc/sysconfig/ntpd /etc/sysconfig/pcmcia, /etc/sysconfig/pcmcia /etc/sysconfig/radvd, /etc/sysconfig/radvd /etc/sysconfig/rawdevices, /etc/sysconfig/rawdevices /etc/sysconfig/samba, /etc/sysconfig/samba /etc/sysconfig/selinux, /etc/sysconfig/selinux /etc/sysconfig/sendmail, /etc/sysconfig/sendmail /etc/sysconfig/spamassassin, /etc/sysconfig/spamassassin /etc/sysconfig/squid, /etc/sysconfig/squid /etc/sysconfig/system-config-securitylevel , /etc/sysconfig/system-config-securitylevel /etc/sysconfig/system-config-users, /etc/sysconfig/system-config-users /etc/sysconfig/system-logviewer, /etc/sysconfig/system-logviewer /etc/sysconfig/tux, /etc/sysconfig/tux /etc/sysconfig/vncservers, /etc/sysconfig/vncservers /etc/sysconfig/xinetd, /etc/sysconfig/xinetd additional information about, The sysconfig Directory additional resources, Additional Resources installed documentation, Installed Documentation directories in, Directories in the /etc/sysconfig/ Directory files found in, Files in the /etc/sysconfig/ Directory sysctl configuring with /etc/sysctl.conf, Using the sysctl Command controlling /proc/sys/, Using the sysctl Command SysRq (see system request key) system request key enabling, /proc/sys/ System Request Key definition of, /proc/sys/ setting timing for, /proc/sys/kernel/ SysV init (see init command) T TCP wrappers, xinetd (see also xinetd) additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites advantages of, Advantages of TCP Wrappers configuration files /etc/hosts.allow, TCP Wrappers , TCP Wrappers Configuration Files /etc/hosts.deny, TCP Wrappers , TCP Wrappers Configuration Files access control option, 
Access Control expansions, Expansions formatting rules within, Formatting Access Rules hosts access files, TCP Wrappers Configuration Files log option, Logging operators, Operators option fields, Option Fields patterns, Patterns shell command option, Shell Commands spawn option, Shell Commands twist option, Shell Commands wildcards, Wildcards definition of, TCP Wrappers introducing, TCP Wrappers and xinetd testparm program, testparm testprns program, testprns ThreadsPerChild Apache configuration directive, ThreadsPerChild Timeout Apache configuration directive, Timeout TLB cache (see hugepages) troubleshooting error log, ErrorLog twm, Window Managers (see also X) TypesConfig Apache configuration directive, TypesConfig U UseCanonicalName Apache configuration directive, UseCanonicalName User Apache configuration directive, User user private groups (see groups) and shared directories, Group Directories UserDir Apache configuration directive, UserDir users /etc/passwd, Standard Users additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books introducing, Users and Groups personal HTML directories, UserDir standard, Standard Users tools for management of User Manager, User and Group Management Tools useradd, User and Group Management Tools UID, Users and Groups usr directory, The /usr/ Directory usr/local/ directory, The /usr/local/ Directory V var directory, The /var/ Directory var/lib/rpm/ directory, Special File Locations Under Red Hat Enterprise Linux var/spool/up2date/ directory, Special File Locations Under Red Hat Enterprise Linux virtual file system (see proc file system) virtual files (see proc file system) virtual hosts configuring, Virtual Hosts Listen command, Setting Up Virtual Hosts name-based, Virtual Hosts Options, Options server side includes, AddType VirtualHost Apache configuration directive, VirtualHost vsftpd, FTP Servers (see also FTP) additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites condrestart, Starting and Stopping vsftpd configuration file /etc/vsftpd/vsftpd.conf, vsftpd Configuration Options access controls, Log In Options and Access Controls anonymous user options, Anonymous User Options daemon options, Daemon Options directory options, Directory Options file transfer options, File Transfer Options format of, vsftpd Configuration Options local user options, Local User Options logging options, Logging Options login options, Log In Options and Access Controls network options, Network Options multihome configuration, Starting Multiple Copies of vsftpd restarting, Starting and Stopping vsftpd RPM files installed by, Files Installed with vsftpd security features, FTP Servers starting, Starting and Stopping vsftpd starting multiple copies of, Starting Multiple Copies of vsftpd status, Starting and Stopping vsftpd stopping, Starting and Stopping vsftpd W wbinfo program, wbinfo webmaster email address for, ServerAdmin window managers (see X) X X /etc/X11/xorg.conf boolean values for, The Structure Device, Device DRI, DRI Files section, Files InputDevice section, InputDevice introducing, xorg.conf Module section, Module Monitor, Monitor Screen, Screen Section tag, The Structure ServerFlags section, ServerFlags ServerLayout section, ServerLayout structure of, The Structure additional resources, Additional Resources installed documentation, Installed Documentation related books, Related Books useful websites, 
Useful Websites configuration files /etc/X11/ directory, X Server Configuration Files /etc/X11/xorg.conf, xorg.conf options within, X Server Configuration Files server options, xorg.conf desktop environments GNOME, Desktop Environments KDE, Desktop Environments display managers configuration of preferred, Runlevel 5 definition of, Runlevel 5 GNOME, Runlevel 5 KDE, Runlevel 5 prefdm script, Runlevel 5 xdm, Runlevel 5 fonts core X font subsystem, Core X Font System Fontconfig, Fontconfig Fontconfig, adding fonts to, Adding Fonts to Fontconfig FreeType, Fontconfig introducing, Fonts X Font Server, Core X Font System X Render Extension, Fontconfig xfs, Core X Font System xfs configuration, xfs Configuration xfs, adding fonts to, Adding Fonts to xfs Xft, Fontconfig introducing, The X Window System runlevels 3, Runlevel 3 5, Runlevel 5 runlevels and, Runlevels and X utilities system-config-display, The X11R6.8 Release window managers kwin, Window Managers metacity, Window Managers mwm, Window Managers twm, Window Managers X clients, The X Window System , Desktop Environments and Window Managers desktop environments, Desktop Environments startx command, Runlevel 3 window managers, Window Managers xinit command, Runlevel 3 X server, The X Window System features of, The X11R6.8 Release X Window System (see X) X.500 (see LDAP) X.500 Lite (see LDAP) xinetd, xinetd (see also TCP wrappers) additional resources installed documentation, Installed Documentation related books, Related Books useful websites, Useful Websites configuration files, xinetd Configuration Files /etc/xinetd.conf, The /etc/xinetd.conf File /etc/xinetd.d/ directory, The /etc/xinetd.d/ Directory access control options, Access Control Options binding options, Binding and Redirection Options logging options, The /etc/xinetd.conf File , The /etc/xinetd.d/ Directory , Logging Options redirection options, Binding and Redirection Options resource management options, Resource Management Options DoS attacks and, Resource Management Options introducing, TCP Wrappers and xinetd , xinetd relationship with TCP wrappers, Access Control Options xinit (see X) Xorg (see Xorg) Y YABOOT, Boot Loaders and System Architecture (see also boot loaders) Z z/IPL, Boot Loaders and System Architecture (see also boot loaders)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ix01
Chapter 6. Composable services and custom roles
Chapter 6. Composable services and custom roles The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core heat template collection on the director node. However, you can also create custom roles that contain specific sets of services. You can use this flexibility to create different combinations of services on different roles. This chapter explores the architecture of custom roles, composable services, and methods for using them. 6.1. Supported role architecture The following architectures are available when you use custom roles and composable services: Default architecture Uses the default roles_data files. All controller services are contained within one Controller role. Supported standalone roles Use the predefined files in /usr/share/openstack-tripleo-heat-templates/roles to generate a custom roles_data file. For more information, see Section 6.4, "Supported custom roles" . Custom composable services Create your own roles and use them to generate a custom roles_data file. Note that only a limited number of composable service combinations have been tested and verified, and Red Hat cannot support all composable service combinations. 6.2. Examining the roles_data file The roles_data file contains a YAML-formatted list of the roles that director deploys onto nodes. Each role contains definitions of all of the services that comprise the role. Use the following example snippet to understand the roles_data syntax: The core heat template collection contains a default roles_data file located at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml . The default file contains definitions of the following role types: Controller Compute BlockStorage ObjectStorage CephStorage . The openstack overcloud deploy command includes the default roles_data.yaml file during deployment. However, you can use the -r argument to override this file with a custom roles_data file: 6.3. Creating a roles_data file Although you can create a custom roles_data file manually, you can also generate the file automatically using individual role templates. Director provides several commands to manage role templates and automatically generate a custom roles_data file. Procedure List the default role templates: View the role definition in YAML format with the openstack overcloud roles show command: Generate a custom roles_data file. Use the openstack overcloud roles generate command to join multiple predefined roles into a single file. For example, run the following command to generate a roles_data.yaml file that contains the Controller , Compute , and Networker roles: Use the -o option to define the name of the output file. This command creates a custom roles_data file. However, the example uses the Controller and Networker roles, which both contain the same networking agents. This means that the networking services scale from the Controller role to the Networker role and the overcloud balances the load for networking services between the Controller and Networker nodes. To make this Networker role standalone, you can create your own custom Controller role, as well as any other role that you require. This allows you to generate a roles_data file from your own custom roles. Copy the directory from the core heat template collection to the home directory of the stack user: Add or modify the custom role files in this directory.
Use the --roles-path option with any of the role sub-commands to use this directory as the source for your custom roles: This command generates a single my_roles_data.yaml file from the individual roles in the ~/roles directory. Note The default roles collection also contains the ControllerOpenStack role, which does not include services for Networker , Messaging , and Database roles. You can use the ControllerOpenStack in combination with the standalone Networker , Messaging , and Database roles. 6.4. Supported custom roles The following table contains information about the available custom roles. You can find custom role templates in the /usr/share/openstack-tripleo-heat-templates/roles directory. Role Description File BlockStorage OpenStack Block Storage (cinder) node. BlockStorage.yaml CephAll Full standalone Ceph Storage node. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. CephAll.yaml CephFile Standalone scale-out Ceph Storage file role. Includes OSD and Object Operations (MDS). CephFile.yaml CephObject Standalone scale-out Ceph Storage object role. Includes OSD and Object Gateway (RGW). CephObject.yaml CephStorage Ceph Storage OSD node role. CephStorage.yaml ComputeAlt Alternate Compute node role. ComputeAlt.yaml ComputeDVR DVR enabled Compute node role. ComputeDVR.yaml ComputeHCI Compute node with hyper-converged infrastructure. Includes Compute and Ceph OSD services. ComputeHCI.yaml ComputeInstanceHA Compute Instance HA node role. Use in conjunction with the environments/compute-instanceha.yaml environment file. ComputeInstanceHA.yaml ComputeLiquidio Compute node with Cavium Liquidio Smart NIC. ComputeLiquidio.yaml ComputeOvsDpdkRT Compute OVS DPDK RealTime role. ComputeOvsDpdkRT.yaml ComputeOvsDpdk Compute OVS DPDK role. ComputeOvsDpdk.yaml ComputePPC64LE Compute role for ppc64le servers. ComputePPC64LE.yaml ComputeRealTime Compute role optimized for real-time behaviour. When using this role, it is mandatory that an overcloud-realtime-compute image is available and the role-specific parameters IsolCpusList , NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet are set according to the hardware of the real-time compute nodes. ComputeRealTime.yaml ComputeSriovRT Compute SR-IOV RealTime role. ComputeSriovRT.yaml ComputeSriov Compute SR-IOV role. ComputeSriov.yaml Compute Standard Compute node role. Compute.yaml ControllerAllNovaStandalone Controller role that does not contain the database, messaging, networking, and OpenStack Compute (nova) control components. Use in combination with the Database , Messaging , Networker , and Novacontrol roles. ControllerAllNovaStandalone.yaml ControllerNoCeph Controller role with core Controller services loaded but no Ceph Storage (MON) components. This role handles database, messaging, and network functions but not any Ceph Storage functions. ControllerNoCeph.yaml ControllerNovaStandalone Controller role that does not contain the OpenStack Compute (nova) control component. Use in combination with the Novacontrol role. ControllerNovaStandalone.yaml ControllerOpenstack Controller role that does not contain the database, messaging, and networking components. Use in combination with the Database , Messaging , and Networker roles. ControllerOpenstack.yaml ControllerStorageNfs Controller role with all core services loaded and uses Ceph NFS. This role handles database, messaging, and network functions. ControllerStorageNfs.yaml Controller Controller role with all core services loaded. 
This role handles database, messaging, and network functions. Controller.yaml ControllerSriov (ML2/OVN) Same as the normal Controller role but with the OVN Metadata agent deployed. ControllerSriov.yaml Database Standalone database role. Database managed as a Galera cluster using Pacemaker. Database.yaml HciCephAll Compute node with hyper-converged infrastructure and all Ceph Storage services. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. HciCephAll.yaml HciCephFile Compute node with hyper-converged infrastructure and Ceph Storage file services. Includes OSD and Object Operations (MDS). HciCephFile.yaml HciCephMon Compute node with hyper-converged infrastructure and Ceph Storage block services. Includes OSD, MON, and Manager. HciCephMon.yaml HciCephObject Compute node with hyper-converged infrastructure and Ceph Storage object services. Includes OSD and Object Gateway (RGW). HciCephObject.yaml IronicConductor Ironic Conductor node role. IronicConductor.yaml Messaging Standalone messaging role. RabbitMQ managed with Pacemaker. Messaging.yaml Networker Standalone networking role. Runs OpenStack networking (neutron) agents on their own. If your deployment uses the ML2/OVN mechanism driver, see additional steps in Deploying a Custom Role with ML2/OVN in the Networking Guide . Networker.yaml NetworkerSriov Same as the normal Networker role but with the OVN Metadata agent deployed. See additional steps in Deploying a Custom Role with ML2/OVN in the Networking Guide . NetworkerSriov.yaml Novacontrol Standalone nova-control role to run OpenStack Compute (nova) control agents on their own. Novacontrol.yaml ObjectStorage Swift Object Storage node role. ObjectStorage.yaml Telemetry Telemetry role with all the metrics and alarming services. Telemetry.yaml 6.5. Examining role parameters Each role contains the following parameters: name (Mandatory) The name of the role, which is a plain text name with no spaces or special characters. Check that the chosen name does not cause conflicts with other resources. For example, use Networker as a name instead of Network . description (Optional) A plain text description for the role. tags (Optional) A YAML list of tags that define role properties. Use this parameter to define the primary role with both the controller and primary tags together: Important If you do not tag the primary role, the first role that you define becomes the primary role. Ensure that this role is the Controller role. networks A YAML list or dictionary of networks that you want to configure on the role. If you use a YAML list, list each composable network: If you use a dictionary, map each network to a specific subnet in your composable networks. Default networks include External , InternalApi , Storage , StorageMgmt , Tenant , and Management . CountDefault (Optional) Defines the default number of nodes that you want to deploy for this role. HostnameFormatDefault (Optional) Defines the default hostname format for the role. The default naming convention uses the following format: For example, the default Controller nodes are named: disable_constraints (Optional) Defines whether to disable OpenStack Compute (nova) and OpenStack Image Storage (glance) constraints when deploying with director. Use this parameter when you deploy an overcloud with pre-provisioned nodes. For more information, see Configuring a Basic Overcloud with Pre-Provisioned Nodes in the Director Installation and Usage guide. 
update_serial (Optional) Defines how many nodes to update simultaneously during OpenStack update operations. In the default roles_data.yaml file: The default is 1 for Controller, Object Storage, and Ceph Storage nodes. The default is 25 for Compute and Block Storage nodes. If you omit this parameter from a custom role, the default is 1 . ServicesDefault (Optional) Defines the default list of services to include on the node. For more information, see Section 6.8, "Examining composable service architecture" . You can use these parameters to create new roles and also define which services to include in your roles. The openstack overcloud deploy command integrates the parameters from the roles_data file into some of the Jinja2-based templates. For example, at certain points, the overcloud.j2.yaml heat template iterates over the list of roles from roles_data.yaml and creates parameters and resources specific to each respective role. For example, the following snippet contains the resource definition for each role in the overcloud.j2.yaml heat template: This snippet shows how the Jinja2-based template incorporates the {{role.name}} variable to define the name of each role as an OS::Heat::ResourceGroup resource. This in turn uses each name parameter from the roles_data file to name each respective OS::Heat::ResourceGroup resource. 6.6. Creating a new role You can use the composable service architecture to create new roles according to the requirements of your deployment. For example, you might want to create a new Horizon role to host only the OpenStack Dashboard ( horizon ). Note Role names must start with a letter, end with a letter or digit, and contain only letters, digits, and hyphens. Underscores must never be used in role names. Procedure Create a custom copy of the default roles directory: Create a new file called ~/roles/Horizon.yaml and create a new Horizon role that contains base and core OpenStack Dashboard services: Set the name parameter to the name of the custom role. Custom role names have a maximum length of 47 characters. Set the CountDefault parameter to 1 so that a default overcloud always includes the Horizon node. Optional: If you want to scale the services in an existing overcloud, retain the existing services on the Controller role. If you want to create a new overcloud and you want the OpenStack Dashboard to remain on the standalone role, remove the OpenStack Dashboard components from the Controller role definition: Generate the new roles_data-horizon.yaml file using the ~/roles directory as the source: Define a new flavor for this role so that you can tag specific nodes. For this example, use the following commands to create a horizon flavor: Create a horizon flavor: Note These properties are not used for scheduling instances; however, the Compute scheduler does use the disk size to determine the root partition size. Tag each bare metal node that you want to designate for the Dashboard service (horizon) with a custom resource class: Replace <NODE> with the ID of the bare metal node. Associate the horizon flavor with the custom resource class: To determine the name of a custom resource class that corresponds to a resource class of a bare metal node, convert the resource class to uppercase, replace punctuation with an underscore, and prefix the value with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. 
Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties for scheduling instances: Define the Horizon node count and flavor using the following environment file snippet: Include the new roles_data-horizon.yaml file and environment file in the openstack overcloud deploy command, along with any other environment files relevant to your deployment: This configuration creates a three-node overcloud that consists of one Controller node, one Compute node, and one Horizon node. To view the list of nodes in your overcloud, run the following command: 6.7. Guidelines and limitations Note the following guidelines and limitations for the composable role architecture. For services not managed by Pacemaker: You can assign services to standalone custom roles. You can create additional custom roles after the initial deployment and deploy them to scale existing services. For services managed by Pacemaker: You can assign Pacemaker-managed services to standalone custom roles. Pacemaker has a 16 node limit. If you assign the Pacemaker service ( OS::TripleO::Services::Pacemaker ) to 16 nodes, subsequent nodes must use the Pacemaker Remote service ( OS::TripleO::Services::PacemakerRemote ) instead. You cannot have the Pacemaker service and Pacemaker Remote service on the same role. Do not include the Pacemaker service ( OS::TripleO::Services::Pacemaker ) on roles that do not contain Pacemaker-managed services. You cannot scale up or scale down a custom role that contains OS::TripleO::Services::Pacemaker or OS::TripleO::Services::PacemakerRemote services. General limitations: You cannot change custom roles and composable services during a major version upgrade. You cannot modify the list of services for any role after deploying an overcloud. Modifying the service lists after overcloud deployment can cause deployment errors and leave orphaned services on nodes. 6.8. Examining composable service architecture The core heat template collection contains two sets of composable service templates: deployment contains the templates for key OpenStack services. puppet/services contains legacy templates for configuring composable services. In some cases, the composable services use templates from this directory for compatibility. In most cases, the composable services use the templates in the deployment directory. Each template contains a description that identifies its purpose. For example, the deployment/time/ntp-baremetal-puppet.yaml service template contains the following description: These service templates are registered as resources specific to a Red Hat OpenStack Platform deployment. This means that you can call each resource using a unique heat resource namespace defined in the overcloud-resource-registry-puppet.j2.yaml file. All services use the OS::TripleO::Services namespace for their resource type. Some resources use the base composable service templates directly: However, core services require containers and use the containerized service templates. For example, the keystone containerized service uses the following resource: These containerized templates usually reference other templates to include dependencies. For example, the deployment/keystone/keystone-container-puppet.yaml template stores the output of the base template in the ContainersCommon resource: The containerized template can then incorporate functions and data from the containers-common.yaml template. 
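To see exactly which template a composable service resolves to on your system, you can search the resource registry file described above. The commands below are a minimal sketch that assumes the default template location on the director node and uses the Keystone service as an example; any other OS::TripleO::Services name works the same way, and grep and ls are the only tools involved.

# Show which template the Keystone composable service is registered against
grep -n "OS::TripleO::Services::Keystone" /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.j2.yaml

# List the containerized service templates shipped for keystone
ls /usr/share/openstack-tripleo-heat-templates/deployment/keystone/

The same lookup is a quick way to confirm whether a service is mapped to a real template or to OS::Heat::None before you override it in an environment file.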
The overcloud.j2.yaml heat template includes a section of Jinja2-based code to define a service list for each custom role in the roles_data.yaml file: For the default roles, this creates the following service list parameters: ControllerServices , ComputeServices , BlockStorageServices , ObjectStorageServices , and CephStorageServices . You define the default services for each custom role in the roles_data.yaml file. For example, the default Controller role contains the following content: These services are then defined as the default list for the ControllerServices parameter. Note You can also use an environment file to override the default list for the service parameters. For example, you can define ControllerServices as a parameter_default in an environment file to override the services list from the roles_data.yaml file. 6.9. Adding and removing services from roles The basic method of adding or removing services involves creating a copy of the default service list for a node role and then adding or removing services. For example, you might want to remove OpenStack Orchestration (heat) from the Controller nodes. Procedure Create a custom copy of the default roles directory: Edit the ~/roles/Controller.yaml file and modify the service list for the ServicesDefault parameter. Scroll to the OpenStack Orchestration services and remove them: Generate the new roles_data file: Include this new roles_data file when you run the openstack overcloud deploy command: This command deploys an overcloud without OpenStack Orchestration services installed on the Controller nodes. Note You can also disable services in the roles_data file using a custom environment file. Redirect the services to disable to the OS::Heat::None resource. For example: 6.10. Enabling disabled services Some services are disabled by default. These services are registered as null operations ( OS::Heat::None ) in the overcloud-resource-registry-puppet.j2.yaml file. For example, the Block Storage backup service ( cinder-backup ) is disabled: To enable this service, include an environment file that links the resource to its respective heat templates in the puppet/services directory. Some services have predefined environment files in the environments directory. For example, the Block Storage backup service uses the environments/cinder-backup.yaml file, which contains the following entry: Procedure Add an entry in an environment file that links the CinderBackup service to the heat template that contains the cinder-backup configuration: This entry overrides the default null operation resource and enables the service. Include this environment file when you run the openstack overcloud deploy command: 6.11. Creating a generic node with no services You can create generic Red Hat Enterprise Linux 8.4 nodes without any OpenStack services configured. This is useful when you need to host software outside of the core Red Hat OpenStack Platform (RHOSP) environment. For example, RHOSP provides integration with monitoring tools such as Kibana and Sensu. For more information, see the Monitoring Tools Configuration Guide . While Red Hat does not provide support for the monitoring tools themselves, director can create a generic Red Hat Enterprise Linux 8.4 node to host these tools. Note The generic node still uses the base overcloud-full image rather than a base Red Hat Enterprise Linux 8 image. This means the node has some Red Hat OpenStack Platform software installed but not enabled or configured. 
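Tying back to the note in the composable service architecture discussion above, you can also override a role's service list from an environment file instead of regenerating the roles_data file. The snippet below is only a shape sketch, not a working configuration: a real ControllerServices override must list every service the role needs, and the entries shown are illustrative names taken from the Controller service lists in this chapter.

parameter_defaults:
  ControllerServices:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::HAproxy
    # ...include the rest of the Controller services here...

Because a parameter_default overrides the defaults from roles_data.yaml, this approach is convenient for quick experiments, but keeping the authoritative list in a generated roles_data file is usually easier to maintain.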
Procedure Create a generic role in your custom roles_data.yaml file that does not contain a ServicesDefault list: Ensure that you retain the existing Controller and Compute roles. Create an environment file generic-node-params.yaml to specify how many generic Red Hat Enterprise Linux 8 nodes you require and the flavor to use when selecting nodes to provision: Include both the roles file and the environment file when you run the openstack overcloud deploy command: This configuration deploys a three-node environment with one Controller node, one Compute node, and one generic Red Hat Enterprise Linux 8 node.
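If you prefer not to edit roles_data.yaml by hand, a possible variation, sketched here on the assumption that you create a hypothetical ~/roles/Generic.yaml role file containing only the role name, is to reuse the roles generate workflow shown earlier in this chapter:

# Hypothetical ~/roles/Generic.yaml -- a role entry with no ServicesDefault list:
# - name: Generic

# Build the combined roles file from the custom roles directory
openstack overcloud roles generate -o ~/templates/roles_data_with_generic.yaml --roles-path ~/roles Controller Compute Generic

Either way, the deploy command stays the same; director only uses the final roles file that you pass with the -r option together with the generic-node-params.yaml environment file.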
[ "- name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - name: Compute description: | Basic Compute Node role ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient", "openstack overcloud deploy --templates -r ~/templates/roles_data-custom.yaml", "openstack overcloud roles list BlockStorage CephStorage Compute ComputeHCI ComputeOvsDpdk Controller", "openstack overcloud roles show Compute", "openstack overcloud roles generate -o ~/roles_data.yaml Controller Compute Networker", "cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.", "openstack overcloud roles generate -o my_roles_data.yaml --roles-path ~/roles Controller Compute Networker", "- name: Controller tags: - primary - controller", "networks: - External - InternalApi - Storage - StorageMgmt - Tenant", "networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet", "[STACK NAME]-[ROLE NAME]-[NODE ID]", "overcloud-controller-0 overcloud-controller-1 overcloud-controller-2", "{{role.name}}: type: OS::Heat::ResourceGroup depends_on: Networks properties: count: {get_param: {{role.name}}Count} removal_policies: {get_param: {{role.name}}RemovalPolicies} resource_def: type: OS::TripleO::{{role.name}} properties: CloudDomain: {get_param: CloudDomain} ServiceNetMap: {get_attr: [ServiceNetMap, service_net_map]} EndpointMap: {get_attr: [EndpointMap, endpoint_map]}", "cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.", "- name: Horizon CountDefault: 1 HostnameFormatDefault: '%stackname%-horizon-%index%' ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::Kernel - OS::TripleO::Services::Ntp - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::SensuClient - OS::TripleO::Services::FluentdClient - OS::TripleO::Services::AuditD - OS::TripleO::Services::Collectd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::Apache - OS::TripleO::Services::Horizon", "- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::HAproxy - OS::TripleO::Services::HeatApi - OS::TripleO::Services::HeatApiCfn - OS::TripleO::Services::HeatApiCloudwatch - OS::TripleO::Services::HeatEngine # - OS::TripleO::Services::Horizon # Remove this service - OS::TripleO::Services::IronicApi - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Keepalived", "openstack overcloud roles generate -o roles_data-horizon.yaml --roles-path ~/roles Controller Compute Horizon", "(undercloud)USD openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 horizon", "(undercloud)USD openstack baremetal node set --resource-class baremetal.HORIZON <NODE>", "(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_HORIZON=1 horizon", "(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 horizon", "parameter_defaults: OvercloudHorizonFlavor: horizon HorizonCount: 1", 
"openstack overcloud deploy --templates -r ~/templates/roles_data-horizon.yaml -e ~/templates/node-count-flavor.yaml", "openstack server list", "description: > NTP service deployment using puppet, this YAML file creates the interface between the HOT template and the puppet manifest that actually installs and configure NTP.", "resource_registry: OS::TripleO::Services::Ntp: deployment/time/ntp-baremetal-puppet.yaml", "resource_registry: OS::TripleO::Services::Keystone: deployment/keystone/keystone-container-puppet.yaml", "resources: ContainersCommon: type: ../containers-common.yaml", "{{role.name}}Services: description: A list of service resources (configured in the heat resource_registry) which represent nested stacks for each service that should get installed on the {{role.name}} role. type: comma_delimited_list default: {{role.ServicesDefault|default([])}}", "- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Core - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry", "cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.", "- OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry - OS::TripleO::Services::HeatApi # Remove this service - OS::TripleO::Services::HeatApiCfn # Remove this service - OS::TripleO::Services::HeatApiCloudwatch # Remove this service - OS::TripleO::Services::HeatEngine # Remove this service - OS::TripleO::Services::MySQL - OS::TripleO::Services::NeutronDhcpAgent", "openstack overcloud roles generate -o roles_data-no_heat.yaml --roles-path ~/roles Controller Compute Networker", "openstack overcloud deploy --templates -r ~/templates/roles_data-no_heat.yaml", "resource_registry: OS::TripleO::Services::HeatApi: OS::Heat::None OS::TripleO::Services::HeatApiCfn: OS::Heat::None OS::TripleO::Services::HeatApiCloudwatch: OS::Heat::None OS::TripleO::Services::HeatEngine: OS::Heat::None", "OS::TripleO::Services::CinderBackup: OS::Heat::None", "resource_registry: OS::TripleO::Services::CinderBackup: ../podman/services/pacemaker/cinder-backup.yaml", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml", "- name: Generic - name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - name: Compute CountDefault: 1 ServicesDefault: - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient", "parameter_defaults: OvercloudGenericFlavor: baremetal GenericCount: 1", "openstack overcloud deploy --templates -r ~/templates/roles_data_with_generic.yaml -e ~/templates/generic-node-params.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_composable-services-and-custom-roles
Appendix E. Permissions required to provision hosts
Appendix E. Permissions required to provision hosts The following list provides an overview of the permissions a non-admin user requires to provision hosts. Resource name Permissions Additional details Activation Keys view_activation_keys Ansible role view_ansible_roles Required if Ansible is used. Architecture view_architectures Compute profile view_compute_profiles Compute resource view_compute_resources, create_compute_resources, destroy_compute_resources, power_compute_resources Required to provision bare-metal hosts. view_compute_resources_vms, create_compute_resources_vms, destroy_compute_resources_vms, power_compute_resources_vms Required to provision virtual machines. Content Views view_content_views Domain view_domains Environment view_environments Host view_hosts, create_hosts, edit_hosts, destroy_hosts, build_hosts, power_hosts, play_roles_on_host view_discovered_hosts, submit_discovered_hosts, auto_provision_discovered_hosts, provision_discovered_hosts, edit_discovered_hosts, destroy_discovered_hosts Required if the Discovery service is enabled. Hostgroup view_hostgroups, create_hostgroups, edit_hostgroups, play_roles_on_hostgroup Image view_images Lifecycle environment view_lifecycle_environments Location view_locations Medium view_media Operatingsystem view_operatingsystems Organization view_organizations Parameter view_params, create_params, edit_params, destroy_params Product and Repositories view_products Provisioning template view_provisioning_templates Ptable view_ptables Capsule view_smart_proxies, view_smart_proxies_puppetca view_openscap_proxies Required if the OpenSCAP plugin is enabled. Subnet view_subnets Additional resources Creating a Role in Administering Red Hat Satellite Adding Permissions to a Role in Administering Red Hat Satellite Assigning Roles to a User in Administering Red Hat Satellite
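As a rough sketch, you could bundle these permissions into a custom role with the hammer CLI instead of the web UI; the role name, user login, and the exact hammer filter options shown here are assumptions, so verify them against your Satellite version before use:
# Create an empty role, then attach filters that carry subsets of the permissions listed above.
hammer role create --name "Provisioning Operator"
hammer filter create --role "Provisioning Operator" \
  --permissions view_hosts,create_hosts,edit_hosts,build_hosts,power_hosts
hammer filter create --role "Provisioning Operator" \
  --permissions view_hostgroups,view_domains,view_subnets,view_provisioning_templates
# Assign the role to the non-admin user who provisions hosts.
hammer user add-role --login provision_user --role "Provisioning Operator"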
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/permissions-required-to-provision-hosts_provisioning
A.7. Performance Co-Pilot (PCP)
A.7. Performance Co-Pilot (PCP) Performance Co-Pilot (PCP) provides a large number of command-line tools, graphical tools, and libraries. For more information on these tools, see their respective manual pages. Table A.1. System Services Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7 Name Description pmcd The Performance Metric Collector Daemon (PMCD). pmie The Performance Metrics Inference Engine. pmlogger The performance metrics logger. pmmgr Manages a collection of PCP daemons for a set of discovered local and remote hosts running the Performance Metric Collector Daemon (PMCD) according to zero or more configuration directories. pmproxy The Performance Metric Collector Daemon (PMCD) proxy server. pmwebd Binds a subset of the Performance Co-Pilot client API to RESTful web applications using the HTTP protocol. Table A.2. Tools Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7 Name Description pcp Displays the current status of a Performance Co-Pilot installation. pmatop Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network. pmchart Plots performance metrics values available through the facilities of the Performance Co-Pilot. pmclient Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI). pmcollectl Collects and displays system-level data, either from a live system or from a Performance Co-Pilot archive file. pmconfig Displays the values of configuration parameters. pmdbg Displays available Performance Co-Pilot debug control flags and their values. pmdiff Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions. pmdumplog Displays control, metadata, index, and state information from a Performance Co-Pilot archive file. pmdumptext Outputs the values of performance metrics collected live or from a Performance Co-Pilot archive. pmerr Displays available Performance Co-Pilot error codes and their corresponding error messages. pmfind Finds PCP services on the network. pmie An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmieconf Displays or sets configurable pmie variables. pminfo Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file. pmiostat Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x dm option). pmlc Interactively configures active pmlogger instances. pmlogcheck Identifies invalid data in a Performance Co-Pilot archive file. pmlogconf Creates and modifies a pmlogger configuration file. pmloglabel Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file. pmlogsummary Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file. pmprobe Determines the availability of performance metrics. pmrep Reports on selected, easily customizable, performance metrics values. pmsocks Allows access to a Performance Co-Pilot hosts through a firewall. pmstat Periodically displays a brief summary of system performance. pmstore Modifies the values of performance metrics. 
pmtrace Provides a command line interface to the trace Performance Metrics Domain Agent (PMDA). pmval Displays the current value of a performance metric. Table A.3. PCP Metric Groups for XFS Metric Group Metrics provided xfs.* General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered, and the number of failures to cluster. xfs.allocs.* xfs.alloc_btree.* Range of metrics regarding the allocation of objects in the file system, these include the number of extent and block creations/frees. Allocation tree lookups and compares, along with extent record creation and deletion from the btree. xfs.block_map.* xfs.bmap_tree.* Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap. xfs.dir_ops.* Counters for directory operations on XFS file systems for creation, entry deletions, count of "getdent" operations. xfs.transactions.* Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. xfs.inode_ops.* Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. xfs.log.* xfs.log_tail.* Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk. Also includes metrics for the number of log flushes and pinning. xfs.xstrat.* Counts for the number of bytes of file data flushed out by the XFS flush daemon, along with counters for the number of buffers flushed to contiguous and non-contiguous space on disk. xfs.attr.* Counts for the number of attribute get, set, remove and list operations over all XFS file systems. xfs.quota.* Metrics for quota operation over XFS file systems, these include counters for the number of quota reclaims, quota cache misses, cache hits and quota data reclaims. xfs.buffer.* Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. xfs.btree.* Metrics regarding the operations of the XFS btree. xfs.control.reset Configuration metrics which are used to reset the metric counters for the XFS stats. Control metrics are toggled by means of the pmstore tool. Table A.4. PCP Metric Groups for XFS per Device Metric Group Metrics provided xfs.perdev.* General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered, and the number of failures to cluster. xfs.perdev.allocs.* xfs.perdev.alloc_btree.* Range of metrics regarding the allocation of objects in the file system, these include the number of extent and block creations/frees. Allocation tree lookups and compares, along with extent record creation and deletion from the btree. xfs.perdev.block_map.* xfs.perdev.bmap_tree.* Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap.
xfs.perdev.dir_ops.* Counters for directory operations on XFS file systems for creation, entry deletions, count of "getdent" operations. xfs.perdev.transactions.* Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. xfs.perdev.inode_ops.* Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. xfs.perdev.log.* xfs.perdev.log_tail.* Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk. Also includes metrics for the number of log flushes and pinning. xfs.perdev.xstrat.* Counts for the number of bytes of file data flushed out by the XFS flush daemon, along with counters for the number of buffers flushed to contiguous and non-contiguous space on disk. xfs.perdev.attr.* Counts for the number of attribute get, set, remove and list operations over all XFS file systems. xfs.perdev.quota.* Metrics for quota operation over XFS file systems, these include counters for the number of quota reclaims, quota cache misses, cache hits and quota data reclaims. xfs.perdev.buffer.* Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. xfs.perdev.btree.* Metrics regarding the operations of the XFS btree.
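A short sketch of inspecting these XFS metric groups from the command line, assuming pmcd and the XFS PMDA are running; exact metric names can differ between PCP versions:
pminfo -t xfs                      # list XFS metrics with one-line help text
pminfo -f xfs.perdev.write_bytes   # fetch the current per-device write byte counters
pmval -s 5 -t 2 xfs.write_bytes    # sample a counter five times at two-second intervals
pmstore xfs.control.reset 1        # reset the XFS counters through the control metric described above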
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-performance_co_pilot_pcp
5.6. Resource Operations
5.6. Resource Operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. Table 5.4, "Properties of an Operation" summarizes the properties of a resource monitoring operation. Table 5.4. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval How frequently (in seconds) to perform the operation. Default value: 0 , meaning never. timeout How long to wait before declaring the action has failed. If you find that your system includes a resource that takes a long time to start, stop, or perform a non-recurring monitor action at startup, and that requires more time than the system allows before declaring that the start action has failed, you can increase this value from the default of 20 or from the value of timeout in "op defaults". on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false You can configure monitoring operations when you create a resource, using the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternatively, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you remove the existing operation, then add the new operation. For example, you can create a VirtualIP with the following command. By default, this command creates these operations. To change the stop timeout operation, execute the following commands. To set global default values for monitoring operations, use the following command. For example, the following command sets a global default of a timeout value of 240s for all monitoring operations. To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command. For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240s.
[ "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource op remove VirtualIP stop interval=0s timeout=20s pcs resource op add VirtualIP stop interval=0s timeout=40s pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults [ options ]", "pcs resource op defaults timeout=240s", "pcs resource op defaults timeout: 240s" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-resourceoperate-haar
4.5. Testing the Resource Configuration
4.5. Testing the Resource Configuration If the Samba configuration was successful, you should be able to mount the Samba share on a node in the cluster. The following example procedure mounts a Samba share. Add an existing user in the cluster node to the smbpasswd file and assign a password. In the following example, there is an existing user smbuser . Mount the Samba share: Check whether the file system is mounted: To check for Samba recovery, perform the following procedure. Manually stop the CTDB resource with the following command: After you stop the resource, the system should recover the service. Check the cluster status with the pcs status command. You should see that the ctdb-clone resource has started, but you will also see a ctdb_monitor failure. To clear this error from the status, enter the following command on one of the cluster nodes:
[ "smbpasswd -a smbuser New SMB password: Retype new SMB password: Added user smbuser", "mkdir /mnt/sambashare mount -t cifs -o user=smbuser //198.162.1.151/public /mnt/sambashare Password for smbuser@//198.162.1.151/public: ********", "mount | grep /mnt/sambashare //198.162.1.151/public on /mnt/sambashare type cifs (rw,relatime,vers=1.0,cache=strict,username=smbuser,domain=LINUXSERVER,uid=0,noforceuid,gid=0,noforcegid,addr=10.37.167.205,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,echo_interval=60,actimeo=1)", "pcs resource debug-stop ctdb", "pcs status Clone Set: ctdb-clone [ctdb] Started: [ z1.example.com z2.example.com ] Failed Actions: * ctdb_monitor_10000 on z1.example.com 'unknown error' (1): call=126, status=complete, exitreason='CTDB status call failed: connect() failed, errno=111', last-rc-change='Thu Oct 19 18:39:51 2017', queued=0ms, exec=0ms", "pcs resource cleanup ctdb-clone" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-unittestsamba-HAAA
Chapter 1. Knative Serving CLI commands
Chapter 1. Knative Serving CLI commands 1.1. kn service commands You can use the following commands to create and manage Knative services. 1.1.1. Creating serverless applications by using the Knative CLI Using the Knative ( kn ) CLI to create serverless applications provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service create command to create a basic serverless application. Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service: USD kn service create <service-name> --image <image> --tag <tag-value> Where: --image is the URI of the image for the application. --tag is an optional flag that can be used to add a tag to the initial revision that is created with the service. Example command USD kn service create showcase \ --image quay.io/openshift-knative/showcase Example output Creating service 'showcase' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration "showcase" is waiting for a Revision to become ready. 3.857s ... 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'showcase' created with latest revision 'showcase-00001' and URL: http://showcase-default.apps-crc.testing 1.1.2. Updating serverless applications by using the Knative CLI You can use the kn service update command for interactive sessions on the command line as you build up a service incrementally. In contrast to the kn service apply command, when using the kn service update command you only have to specify the changes that you want to update, rather than the full configuration for the Knative service. Example commands Update a service by adding a new environment variable: USD kn service update <service_name> --env <key>=<value> Update a service by adding a new port: USD kn service update <service_name> --port 80 Update a service by adding new request and limit parameters: USD kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m Assign the latest tag to a revision: USD kn service update <service_name> --tag <revision_name>=latest Update a tag from testing to staging for the latest READY revision of a service: USD kn service update <service_name> --untag testing --tag @latest=staging Add the test tag to a revision that receives 10% of traffic, and send the rest of the traffic to the latest READY revision of a service: USD kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90 1.1.3. Applying service declarations You can declaratively configure a Knative service by using the kn service apply command. If the service does not exist it is created, otherwise the existing service is updated with the options that have been changed. The kn service apply command is especially useful for shell scripts or in a continuous integration pipeline, where users typically want to fully specify the state of the service in a single command to declare the target state. When using kn service apply you must provide the full configuration for the Knative service. This is different from the kn service update command, which only requires you to specify in the command the options that you want to update. 
Example commands Create a service: USD kn service apply <service_name> --image <image> Add an environment variable to a service: USD kn service apply <service_name> --image <image> --env <key>=<value> Read the service declaration from a JSON or YAML file: USD kn service apply <service_name> -f <filename> 1.1.4. Describing serverless applications by using the Knative CLI You can describe a Knative service by using the kn service describe command. Example commands Describe a service: USD kn service describe --verbose <service_name> The --verbose flag is optional but can be included to provide a more detailed description. The difference between a regular and verbose output is shown in the following examples: Example output without --verbose flag Name: showcase Namespace: default Age: 2m URL: http://showcase-default.apps.ocp.example.com Revisions: 100% @latest (showcase-00001) [1] (2m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m Example output with --verbose flag Name: showcase Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://showcase-default.apps.ocp.example.com Cluster: http://showcase.default.svc.cluster.local Revisions: 100% @latest (showcase-00001) [1] (3m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Env: GREET=Bonjour Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m Describe a service in YAML format: USD kn service describe <service_name> -o yaml Describe a service in JSON format: USD kn service describe <service_name> -o json Print the service URL only: USD kn service describe <service_name> -o url 1.2. kn service commands in offline mode 1.2.1. About the Knative CLI offline mode When you execute kn service commands, the changes immediately propagate to the cluster. However, as an alternative, you can execute kn service commands in offline mode. When you create a service in offline mode, no changes happen on the cluster, and instead the service descriptor file is created on your local machine. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After the descriptor file is created, you can manually modify it and track it in a version control system. You can also propagate changes to the cluster by using the kn service create -f , kn service apply -f , or oc apply -f commands on the descriptor files. The offline mode has several uses: You can manually modify the descriptor file before using it to make changes on the cluster. You can locally track the descriptor file of a service in a version control system. This enables you to reuse the descriptor file in places other than the target cluster, for example in continuous integration (CI) pipelines, development environments, or demos. You can examine the created descriptor files to learn about Knative services. 
In particular, you can see how the resulting service is influenced by the different arguments passed to the kn command. The offline mode has its advantages: it is fast, and does not require a connection to the cluster. However, offline mode lacks server-side validation. Consequently, you cannot, for example, verify that the service name is unique or that the specified image can be pulled. 1.2.2. Creating a service using offline mode You can execute kn service commands in offline mode, so that no changes happen on the cluster, and instead the service descriptor file is created on your local machine. After the descriptor file is created, you can modify the file before propagating changes to the cluster. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. Procedure In offline mode, create a local Knative service descriptor file: USD kn service create showcase \ --image quay.io/openshift-knative/showcase \ --target ./ \ --namespace test Example output Service 'showcase' created in namespace 'test'. The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree. If you do not specify an existing directory, but use a filename, such as --target my-service.yaml , then no directory tree is created. Instead, only the service descriptor file my-service.yaml is created in the current directory. The filename can have the .yaml , .yml , or .json extension. Choosing .json creates the service descriptor file in the JSON format. The --namespace test option places the new service in the test namespace. If you do not use --namespace , and you are logged in to an OpenShift Container Platform cluster, the descriptor file is created in the current namespace. Otherwise, the descriptor file is created in the default namespace. Examine the created directory structure: USD tree ./ Example output ./ └── test └── ksvc └── showcase.yaml 2 directories, 1 file The current ./ directory specified with --target contains the new test/ directory that is named after the specified namespace. The test/ directory contains the ksvc directory, named after the resource type. The ksvc directory contains the descriptor file showcase.yaml , named according to the specified service name. 
Examine the generated service descriptor file: USD cat test/ksvc/showcase.yaml Example output apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: showcase namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/showcase creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/showcase name: "" resources: {} status: {} List information about the new service: USD kn service describe showcase --target ./ --namespace test Example output Name: showcase Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories. Alternatively, you can directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml , .yml , and .json . The --namespace option specifies the namespace, which communicates to kn the subdirectory that contains the necessary service descriptor file. If you do not use --namespace , and you are logged in to an OpenShift Container Platform cluster, kn searches for the service in the subdirectory that is named after the current namespace. Otherwise, kn searches in the default/ subdirectory. Use the service descriptor file to create the service on the cluster: USD kn service create -f test/ksvc/showcase.yaml Example output Creating service 'showcase' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s ... 0.168s Configuration "showcase" is waiting for a Revision to become ready. 23.377s ... 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'showcase' created to latest revision 'showcase-00001' is available at URL: http://showcase-test.apps.example.com 1.3. kn container commands You can use the following commands to create and manage multiple containers in a Knative service spec. 1.3.1. Knative client multi-container support You can use the kn container add command to print YAML container spec to standard output. This command is useful for multi-container use cases because it can be used along with other standard kn flags to create definitions. The kn container add command accepts all container-related flags that are supported for use with the kn service create command. The kn container add command can also be chained by using UNIX pipes ( | ) to create multiple container definitions at once. Example commands Add a container from an image and print it to standard output: USD kn container add <container_name> --image <image_uri> Example command USD kn container add sidecar --image docker.io/example/sidecar Example output containers: - image: docker.io/example/sidecar name: sidecar resources: {} Chain two kn container add commands together, and then pass them to a kn service create command to create a Knative service with two containers: USD kn container add <first_container_name> --image <image_uri> | \ kn container add <second_container_name> --image <image_uri> | \ kn service create <service_name> --image <image_uri> --extra-containers - --extra-containers - specifies a special case where kn reads the pipe input instead of a YAML file. 
Example command USD kn container add sidecar --image docker.io/example/sidecar:first | \ kn container add second --image docker.io/example/sidecar:second | \ kn service create my-service --image docker.io/example/my-app:latest --extra-containers - The --extra-containers flag can also accept a path to a YAML file: USD kn service create <service_name> --image <image_uri> --extra-containers <filename> Example command USD kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml 1.4. kn domain commands You can use the following commands to create and manage domain mappings. 1.4.1. Creating a custom domain mapping by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created a Knative service or route, and control a custom domain that you want to map to that CR. Note Your custom domain must point to the DNS of the OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Map a domain to a CR in the current namespace: USD kn domain create <domain_mapping_name> --ref <target_name> Example command USD kn domain create example.com --ref showcase The --ref flag specifies an Addressable target CR for domain mapping. If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace. Map a domain to a Knative service in a specified namespace: USD kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace> Example command USD kn domain create example.com --ref ksvc:showcase:example-namespace Map a domain to a Knative route: USD kn domain create <domain_mapping_name> --ref <kroute:route_name> Example command USD kn domain create example.com --ref kroute:example-route 1.4.2. Managing custom domain mappings by using the Knative CLI After you have created a DomainMapping custom resource (CR), you can list existing CRs, view information about an existing CR, update CRs, or delete CRs by using the Knative ( kn ) CLI. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created at least one DomainMapping CR. You have installed the Knative ( kn ) CLI tool. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure List existing DomainMapping CRs: USD kn domain list -n <domain_mapping_namespace> View details of an existing DomainMapping CR: USD kn domain describe <domain_mapping_name> Update a DomainMapping CR to point to a new target: USD kn domain update --ref <target> Delete a DomainMapping CR: USD kn domain delete <domain_mapping_name>
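Putting the offline and declarative workflows from the earlier subsections together, a rough CI-style sketch; the image, namespace, and paths are examples only:
# Generate the service descriptor locally, with no cluster connection required.
kn service create showcase \
  --image quay.io/openshift-knative/showcase \
  --target ./manifests/ --namespace test
# Track the descriptor in version control alongside the application source.
git add manifests/ && git commit -m "Add showcase service descriptor"
# Later, in the pipeline, declare the target state on the cluster.
kn service apply showcase -f ./manifests/test/ksvc/showcase.yaml --namespace test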
[ "kn service create <service-name> --image <image> --tag <tag-value>", "kn service create showcase --image quay.io/openshift-knative/showcase", "Creating service 'showcase' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"showcase\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'showcase' created with latest revision 'showcase-00001' and URL: http://showcase-default.apps-crc.testing", "kn service update <service_name> --env <key>=<value>", "kn service update <service_name> --port 80", "kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m", "kn service update <service_name> --tag <revision_name>=latest", "kn service update <service_name> --untag testing --tag @latest=staging", "kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90", "kn service apply <service_name> --image <image>", "kn service apply <service_name> --image <image> --env <key>=<value>", "kn service apply <service_name> -f <filename>", "kn service describe --verbose <service_name>", "Name: showcase Namespace: default Age: 2m URL: http://showcase-default.apps.ocp.example.com Revisions: 100% @latest (showcase-00001) [1] (2m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m", "Name: showcase Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://showcase-default.apps.ocp.example.com Cluster: http://showcase.default.svc.cluster.local Revisions: 100% @latest (showcase-00001) [1] (3m) Image: quay.io/openshift-knative/showcase (pinned to aaea76) Env: GREET=Bonjour Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m", "kn service describe <service_name> -o yaml", "kn service describe <service_name> -o json", "kn service describe <service_name> -o url", "kn service create showcase --image quay.io/openshift-knative/showcase --target ./ --namespace test", "Service 'showcase' created in namespace 'test'.", "tree ./", "./ └── test └── ksvc └── showcase.yaml 2 directories, 1 file", "cat test/ksvc/showcase.yaml", "apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: showcase namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/showcase creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/showcase name: \"\" resources: {} status: {}", "kn service describe showcase --target ./ --namespace test", "Name: showcase Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON", "kn service create -f test/ksvc/showcase.yaml", "Creating service 'showcase' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"showcase\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. 
Service 'showcase' created to latest revision 'showcase-00001' is available at URL: http://showcase-test.apps.example.com", "kn container add <container_name> --image <image_uri>", "kn container add sidecar --image docker.io/example/sidecar", "containers: - image: docker.io/example/sidecar name: sidecar resources: {}", "kn container add <first_container_name> --image <image_uri> | kn container add <second_container_name> --image <image_uri> | kn service create <service_name> --image <image_uri> --extra-containers -", "kn container add sidecar --image docker.io/example/sidecar:first | kn container add second --image docker.io/example/sidecar:second | kn service create my-service --image docker.io/example/my-app:latest --extra-containers -", "kn service create <service_name> --image <image_uri> --extra-containers <filename>", "kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml", "kn domain create <domain_mapping_name> --ref <target_name>", "kn domain create example.com --ref showcase", "kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>", "kn domain create example.com --ref ksvc:showcase:example-namespace", "kn domain create <domain_mapping_name> --ref <kroute:route_name>", "kn domain create example.com --ref kroute:example-route", "kn domain list -n <domain_mapping_namespace>", "kn domain describe <domain_mapping_name>", "kn domain update --ref <target>", "kn domain delete <domain_mapping_name>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/knative_cli/knative-serving-cli-commands
Chapter 2. Configuring the upgrade strategy for OpenShift AI
Chapter 2. Configuring the upgrade strategy for OpenShift AI As a cluster administrator, you can configure either an automatic or manual upgrade strategy for the Red Hat OpenShift AI Operator. Important By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the version that you intend to upgrade to, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version, without human intervention. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version. For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article. Prerequisites You have cluster administrator privileges for your OpenShift cluster. The Red Hat OpenShift AI Operator is installed. Procedure Log in to the OpenShift cluster web console as a cluster administrator. In the Administrator perspective, in the left menu, select Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Subscription tab. Under Update approval , click the pencil icon and select one of the following update strategies: Automatic : New updates are installed as soon as they become available. Manual : A cluster administrator must approve any new update before installation begins. Click Save . Additional resources For more information about upgrading Operators that have been installed by using OLM, see Updating installed Operators in OpenShift Dedicated or Updating installed Operators in Red Hat OpenShift Service on AWS (ROSA)
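The same setting can be changed from the CLI by patching the installPlanApproval field of the Operator subscription; the subscription name and namespace below are assumptions, so confirm them with the first command before patching:
# Find the OpenShift AI Operator subscription and its namespace.
oc get subscriptions --all-namespaces | grep -i rhods
# Switch the update approval strategy to Manual (use "Automatic" to revert).
oc patch subscription rhods-operator -n redhat-ods-operator \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'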
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/upgrading_openshift_ai_cloud_service/configuring-the-upgrade-strategy-for-openshift-ai_upgrade
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/package_manifest/proc_providing-feedback-on-red-hat-documentation_package-manifest
Chapter 10. Using Pluggable Authentication Modules (PAM)
Chapter 10. Using Pluggable Authentication Modules (PAM) Pluggable authentication modules (PAMs) are a common framework for authentication and authorization. Most system applications in Red Hat Enterprise Linux depend on underlying PAM configuration for authentication and authorization. 10.1. About PAM Pluggable Authentication Modules (PAMs) provide a centralized authentication mechanism which system application can use to relay authentication to a centrally configured framework. PAM is pluggable because there is a PAM module for different types of authentication sources (such as Kerberos, SSSD, NIS, or the local file system). Different authentication sources can be prioritized. This modular architecture offers administrators a great deal of flexibility in setting authentication policies for the system. PAM is a useful system for developers and administrators for several reasons: PAM provides a common authentication scheme that can be used with a wide variety of applications. PAM provides significant flexibility and control over authentication for system administrators. PAM provides a single, fully-documented library which allows developers to write programs without having to create their own authentication schemes. 10.1.1. Other PAM Resources PAM has an extensive documentation set with much more detail about both using PAM and writing modules to extend or integrate PAM with other applications. Almost all of the major modules and configuration files with PAM have their own man pages. Additionally, the /usr/share/doc/pam- version# / directory contains a System Administrators' Guide , a Module Writers' Manual , and the Application Developers' Manual , as well as a copy of the PAM standard, DCE-RFC 86.0. The libraries for PAM are available at http://www.linux-pam.org . This is the primary distribution website for the Linux-PAM project, containing information on various PAM modules, frequently asked questions, and additional PAM documentation. 10.1.2. Custom PAM Modules New PAM modules can be created or added at any time for use by PAM-aware applications. PAM-aware programs can immediately use the new module and any methods it defines without being recompiled or otherwise modified. This allows developers and system administrators to use a selection of authentication modules, as well as tests, for different programs without recompiling them. Documentation on writing modules is included in the /usr/share/doc/pam-devel- version# / directory.
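For illustration, a hypothetical PAM service file for an application called myapp; it simply defers every management group to the system-wide stack in /etc/pam.d/system-auth, which is a common pattern on Red Hat Enterprise Linux:
# Write /etc/pam.d/myapp so the application inherits the system authentication policy.
cat > /etc/pam.d/myapp <<'EOF'
#%PAM-1.0
auth       include      system-auth
account    include      system-auth
password   include      system-auth
session    include      system-auth
EOF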
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/pluggable_authentication_modules
10.6. FS-Cache References
10.6. FS-Cache References For more information on cachefilesd and how to configure it, see man cachefilesd and man cachefilesd.conf . The following kernel documents also provide additional information: /usr/share/doc/cachefilesd- version-number /README /usr/share/man/man5/cachefilesd.conf.5.gz /usr/share/man/man8/cachefilesd.8.gz For general information about FS-Cache, including details on its design constraints, available statistics, and capabilities, see the following kernel document: /usr/share/doc/kernel-doc- version /Documentation/filesystems/caching/fscache.txt
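A quick sketch for checking that the cache back end is active and that statistics are being gathered, assuming the kernel was built with FS-Cache statistics enabled:
systemctl status cachefilesd       # confirm the cache daemon is running
cat /proc/fs/fscache/stats         # cumulative statistics described in fscache.txt
man cachefilesd.conf               # configuration reference for the cache back end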
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/fscachemorinfo
Chapter 2. Recommended performance and scalability practices
Chapter 2. Recommended performance and scalability practices 2.1. Recommended control plane practices This topic provides recommended performance and scalability practices for control planes in OpenShift Container Platform. 2.1.1. Recommended practices for scaling the cluster The guidance in this section is only relevant for installations with cloud provider integration. Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform cluster. You scale the worker machines by increasing or decreasing the number of replicas that are defined in the worker machine set. When scaling up the cluster to higher node counts: Spread nodes across all of the available zones for higher availability. Scale up by no more than 25 to 50 machines at once. Consider creating new compute machine sets in each available zone with alternative instance types of similar size to help mitigate any periodic provider capacity constraints. For example, on AWS, use m5.large and m5d.large. Note Cloud providers might implement a quota for API services. Therefore, gradually scale the cluster. The controller might not be able to create the machines if the replicas in the compute machine sets are set to higher numbers all at one time. The number of requests the cloud platform, which OpenShift Container Platform is deployed on top of, is able to handle impacts the process. The controller will start to query more while trying to create, check, and update the machines with the status. The cloud platform on which OpenShift Container Platform is deployed has API request limits; excessive queries might lead to machine creation failures due to cloud platform limitations. Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines. Note When scaling large and dense clusters to lower node counts, it might take large amounts of time because the process involves draining or evicting the objects running on the nodes being terminated in parallel. Also, the client might start to throttle the requests if there are too many objects to evict. The default client queries per second (QPS) and burst rates are currently set to 50 and 100 respectively. These values cannot be modified in OpenShift Container Platform. 2.1.2. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused testing, or Cluster-density . 
This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters Number of worker nodes Cluster-density (namespaces) CPU cores Memory (GB) 24 500 4 16 120 1000 8 32 252 4000 16, but 24 if using the OVN-Kubernetes network plug-in 64, but 128 if using the OVN-Kubernetes network plug-in 501, but untested with the OVN-Kubernetes network plug-in 4000 16 96 The data from the table above is based on an OpenShift Container Platform running on top of AWS, using r5.4xlarge instances as control-plane nodes and m5.2xlarge instances as worker nodes. On a large and dense cluster with three control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, underlying infrastructure, or intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to increase in the resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. Operator Lifecycle Manager (OLM ) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.16 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. Clusters that use a control plane machine set to manage control plane machines. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin. 
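One simple way to track the 60% guidance above is to watch control plane resource usage directly; the label selector assumes the default control plane node role label and the cluster metrics stack being available:
# Show current CPU and memory usage per control plane node.
oc adm top nodes -l node-role.kubernetes.io/master=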
Note In OpenShift Container Platform 4.16, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. The sizes are determined taking that into consideration. 2.1.2.1. Selecting a larger Amazon Web Services instance type for control plane machines If the control plane machines in an Amazon Web Services (AWS) cluster require more resources, you can select a larger AWS instance type for the control plane machines to use. Note The procedure for clusters that use a control plane machine set is different from the procedure for clusters that do not use a control plane machine set. If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status . 2.1.2.1.1. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Managing control plane machines with control plane machine sets 2.1.2.1.2. Changing the Amazon Web Services instance type by using the AWS console You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the instance type in the AWS console. Prerequisites You have access to the AWS console with the permissions required to modify the EC2 Instance for your cluster. You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Open the AWS console and fetch the instances for the control plane machines. Choose one control plane machine instance. For the selected control plane machine, back up the etcd data by creating an etcd snapshot. For more information, see "Backing up etcd". In the AWS console, stop the control plane machine instance. Select the stopped instance, and click Actions Instance Settings Change instance type . Change the instance to a larger type, ensuring that the type is the same base as the selection, and apply changes. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Start the instance. If your OpenShift Container Platform cluster has a corresponding Machine object for the instance, update the instance type of the object to match the instance type set in the AWS console. Repeat this process for each control plane machine. Additional resources Backing up etcd AWS documentation about changing the instance type 2.2. Recommended infrastructure practices This topic provides recommended performance and scalability practices for infrastructure in OpenShift Container Platform. 2.2.1. 
Infrastructure node sizing Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results observed in cluster-density testing detailed in the Control plane node sizing section, where the monitoring stack and the default ingress-controller were moved to these nodes. Number of worker nodes Cluster density, or number of namespaces CPU cores Memory (GB) 27 500 4 24 120 1000 8 48 252 4000 16 128 501 4000 32 128 In general, three infrastructure nodes are recommended per cluster. Important These sizing recommendations should be used as a guideline. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. In addition, the router resource usage can also be affected by the number of routes and the amount/type of inbound requests. These recommendations apply only to infrastructure nodes hosting Monitoring, Ingress and Registry infrastructure components installed during cluster creation. Note In OpenShift Container Platform 4.16, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. This influences the stated sizing recommendations. 2.2.2. Scaling the Cluster Monitoring Operator OpenShift Container Platform exposes metrics that the Cluster Monitoring Operator (CMO) collects and stores in the Prometheus-based monitoring stack. As an administrator, you can view dashboards for system resources, containers, and components metrics in the OpenShift Container Platform web console by navigating to Observe Dashboards . 2.2.3. Prometheus database storage requirements Red Hat performed various tests for different scale sizes. Note The following Prometheus storage requirements are not prescriptive and should be used as a reference. Higher resource consumption might be observed in your cluster depending on workload activity and resource density, including the number of pods, containers, routes, or other resources exposing metrics collected by Prometheus. You can configure the size-based data retention policy to suit your storage requirements. Table 2.1. Prometheus Database storage requirements based on number of nodes/pods in the cluster Number of nodes Number of pods (2 containers per pod) Prometheus storage growth per day Prometheus storage growth per 15 days Network (per tsdb chunk) 50 1800 6.3 GB 94 GB 16 MB 100 3600 13 GB 195 GB 26 MB 150 5400 19 GB 283 GB 36 MB 200 7200 25 GB 375 GB 46 MB Approximately 20 percent of the expected size was added as overhead to ensure that the storage requirements do not exceed the calculated value. The above calculation is for the default OpenShift Container Platform Cluster Monitoring Operator. Note CPU utilization has minor impact. The ratio is approximately 1 core out of 40 per 50 nodes and 1800 pods. Recommendations for OpenShift Container Platform Use at least two infrastructure (infra) nodes. Use at least three openshift-container-storage nodes with non-volatile memory express (SSD or NVMe) drives. 2.2.4. 
Configuring cluster monitoring You can increase the storage capacity for the Prometheus component in the cluster monitoring stack. Procedure To increase the storage capacity for Prometheus: Create a YAML configuration file, cluster-monitoring-config.yaml . For example: apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: "" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring 1 The default value of Prometheus retention is PROMETHEUS_RETENTION_PERIOD=15d . Units are measured in time using one of these suffixes: s, m, h, d. 2 4 The storage class for your cluster. 3 A typical value is PROMETHEUS_STORAGE_SIZE=2000Gi . Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. 5 A typical value is ALERTMANAGER_STORAGE_SIZE=20Gi . Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. Add values for the retention period, storage class, and storage sizes. Save the file. Apply the changes by running: USD oc create -f cluster-monitoring-config.yaml 2.2.5. Additional resources Infrastructure Nodes in OpenShift 4 OpenShift Container Platform cluster maximums Creating infrastructure machine sets 2.3. Recommended etcd practices This topic provides recommended performance and scalability practices for etcd in OpenShift Container Platform. 2.3.1. Recommended etcd practices Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd's consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to an OpenShift API slowness, which affects cluster performance. Because of these reasons, avoid colocating other workloads on the control-plane nodes that are I/O sensitive or intensive and share the same underlying I/O infrastructure. In terms of latency, run etcd on top of a block device that can write at least 50 IOPS of 8000 bytes long sequentially. That is, with a latency of 10ms, keep in mind that uses fdatasync to synchronize each write in the WAL. For heavy loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio. To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads. 
Note The load on etcd arises from static factors, such as the number of nodes and pods, and dynamic factors, including changes in endpoints due to pod autoscaling, pod restarts, job executions, and other workload-related events. To accurately size your etcd setup, you must analyze the specific requirements of your workload. Consider the number of nodes, pods, and other relevant factors that impact the load on etcd. The following hard drive practices provide optimal etcd performance: Use dedicated etcd drives. Avoid drives that communicate over the network, such as iSCSI. Do not place log files or other heavy workloads on etcd drives. Prefer drives with low latency to support fast read and write operations. Prefer high-bandwidth writes for faster compactions and defragmentation. Prefer high-bandwidth reads for faster recovery from failures. Use solid state drives as a minimum selection. Prefer NVMe drives for production environments. Use server-grade hardware for increased reliability. Note Avoid NAS or SAN setups and spinning drives. Ceph Rados Block Device (RBD) and other types of network-attached storage can result in unpredictable network latency. To provide fast storage to etcd nodes at scale, use PCI passthrough to pass NVM devices directly to the nodes. Always benchmark by using utilities such as fio. You can use such utilities to continuously monitor the cluster performance as it increases. Note Avoid using the Network File System (NFS) protocol or other network based file systems. Some key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics. Note The etcd member database sizes can vary in a cluster during normal operations. This difference does not affect cluster upgrades, even if the leader size is different from the other members. To validate the hardware for etcd before or after you create the OpenShift Container Platform cluster, you can use fio. Prerequisites Container runtimes such as Podman or Docker are installed on the machine that you are testing. Data is written to the /var/lib/etcd path. Procedure Run fio and analyze the results: If you use Podman, run this command: USD sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf If you use Docker, run this command: USD sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 10 ms. A few of the most important etcd metrics that might affected by I/O performance are as follow: etcd_disk_wal_fsync_duration_seconds_bucket metric reports the etcd's WAL fsync duration etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration etcd_server_leader_changes_seen_total metric reports the leader changes Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric. 
The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms. Additional resources How to use fio to check etcd disk performance in OpenShift Container Platform etcd performance troubleshooting guide for OpenShift Container Platform 2.3.2. Moving etcd to a different disk You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues. The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.16 container storage. Note This encoded script only supports device names for the following device types: SCSI or SATA /dev/sd* Virtual device /dev/vd* NVMe /dev/nvme*[0-9]*n* Limitations When the new disk is attached to the cluster, the etcd database is part of the root mount. It is not part of the secondary disk or the intended disk when the primary node is recreated. As a result, the primary node will not create a separate /var/lib/etcd mount. Prerequisites You have a backup of your cluster's etcd data. You have installed the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. Add additional disks before uploading the machine configuration. The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role] . This applies to a controller, worker, or a custom pool. Note This procedure does not move parts of the root file system, such as /var/ , to another disk or partition on an installed node. Important This procedure is not supported when using control plane machine sets. Procedure Attach the new disk to the cluster and verify that the disk is detected in the node by running the lsblk command in a debug shell: USD oc debug node/<node_name> # lsblk Note the device name of the new disk reported by the lsblk command. Create the following script and name it etcd-find-secondary-device.sh : #!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid "USD{device}" &> /dev/null if [ USD? == 2 ]; then echo "secondary device found USD{device}" echo "creating filesystem for etcd mount" mkfs.xfs -L var-lib-etcd -f "USD{device}" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo "Couldn't find secondary block device!" >&2 exit 77 1 Replace <device_type_glob> with a shell glob for your block device type. For SCSI or SATA drives, use /dev/sd* ; for virtual drives, use /dev/vd* ; for NVMe drives, use /dev/nvme*[0-9]*n* . 
Create a base64-encoded string from the etcd-find-secondary-device.sh script and note its contents: USD base64 -w0 etcd-find-secondary-device.sh Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target 1 Replace <encoded_etcd_find_secondary_device_script> with the encoded script contents that you noted. Verification steps Run the grep /var/lib/etcd /proc/mounts command in a debug shell for the node to ensure that the disk is mounted: USD oc debug node/<node_name> # grep -w "/var/lib/etcd" /proc/mounts Example output /dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0 Additional resources Red Hat Enterprise Linux CoreOS (RHCOS) 2.3.3. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. 
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 2.3.3.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 2.3.3.2. Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 2.3.4. Setting tuning parameters for etcd You can set the control plane hardware speed to "Standard" , "Slower" , or the default, which is "" . The default setting allows the system to decide which speed to use. 
This value enables upgrades from versions where this feature does not exist, as the system can select values from versions. By selecting one of the other values, you are overriding the default. If you see many leader elections due to timeouts or missed heartbeats and your system is set to "" or "Standard" , set the hardware speed to "Slower" to make the system more tolerant to the increased latency. 2.3.4.1. Changing hardware speed tolerance To change the hardware speed tolerance for etcd, complete the following steps. Procedure Check to see what the current value is by entering the following command: USD oc describe etcd/cluster | grep "Control Plane Hardware Speed" Example output Control Plane Hardware Speed: <VALUE> Note If the output is empty, the field has not been set and should be considered as the default (""). Change the value by entering the following command. Replace <value> with one of the valid values: "" , "Standard" , or "Slower" : USD oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "<value>"}}' The following table indicates the heartbeat interval and leader election timeout for each profile. These values are subject to change. Profile ETCD_HEARTBEAT_INTERVAL ETCD_LEADER_ELECTION_TIMEOUT "" Varies depending on platform Varies depending on platform Standard 100 1000 Slower 500 2500 Review the output: Example output etcd.operator.openshift.io/cluster patched If you enter any value besides the valid values, error output is displayed. For example, if you entered "Faster" as the value, the output is as follows: Example output The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: "Faster": supported values: "", "Standard", "Slower" Verify that the value was changed by entering the following command: USD oc describe etcd/cluster | grep "Control Plane Hardware Speed" Example output Control Plane Hardware Speed: "" Wait for etcd pods to roll out: USD oc get pods -n openshift-etcd -w The following output shows the expected entries for master-0. Before you continue, wait until all masters show a status of 4/4 Running . Example output installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s Enter the following command to review to the values: USD oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT Note These values might not have changed from the default. 
Additional resources Understanding feature gates 2.3.5. Increasing the database size for etcd You can set the disk quota in gibibytes (GiB) for each etcd instance. If you set a disk quota for your etcd instance, you can specify integer values from 8 to 32. The default value is 8. You can specify only increasing values. You might want to increase the disk quota if you encounter a low space alert. This alert indicates that the cluster is too large to fit in etcd despite automatic compaction and defragmentation. If you see this alert, you need to increase the disk quota immediately because after etcd runs out of space, writes fail. Another scenario where you might want to increase the disk quota is if you encounter an excessive database growth alert. This alert is a warning that the database might grow too large in the four hours. In this scenario, consider increasing the disk quota so that you do not eventually encounter a low space alert and possible write fails. If you increase the disk quota, the disk space that you specify is not immediately reserved. Instead, etcd can grow to that size if needed. Ensure that etcd is running on a dedicated disk that is larger than the value that you specify for the disk quota. For large etcd databases, the control plane nodes must have additional memory and storage. Because you must account for the API server cache, the minimum memory required is at least three times the configured size of the etcd database. Important Increasing the database size for etcd is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.3.5.1. Changing the etcd database size To change the database size for etcd, complete the following steps. Procedure Check the current value of the disk quota for each etcd instance by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" Example output Backend Quota Gi B: <value> Change the value of the disk quota by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": <value>}}' Example output etcd.operator.openshift.io/cluster patched Verification Verify that the new value for the disk quota is set by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" The etcd Operator automatically rolls out the etcd instances with the new values. Verify that the etcd pods are up and running by entering the following command: USD oc get pods -n openshift-etcd The following output shows the expected entries. 
Example output NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m Verify that the disk quota value is updated for the etcd pod by entering the following command: USD oc describe -n openshift-etcd pod/<etcd_podname> | grep "ETCD_QUOTA_BACKEND_BYTES" The value might not have changed from the default value of 8 . Example output ETCD_QUOTA_BACKEND_BYTES: 8589934592 Note While the value that you set is an integer in GiB, the value shown in the output is converted to bytes. 2.3.5.2. Troubleshooting If you encounter issues when you try to increase the database size for etcd, the following troubleshooting steps might help. 2.3.5.2.1. Value is too small If the value that you specify is less than 8 , you see the following error message: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 5}}' Example error message The Etcd "cluster" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased To resolve this issue, specify an integer between 8 and 32 . 2.3.5.2.2. Value is too large If the value that you specify is greater than 32 , you see the following error message: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 64}}' Example error message The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32 To resolve this issue, specify an integer between 8 and 32 . 2.3.5.2.3. Value is decreasing If the value is set to a valid value between 8 and 32 , you cannot decrease the value. Otherwise, you see an error message. Check to see the current value by entering the following command: USD oc describe etcd/cluster | grep "Backend Quota" Example output Backend Quota Gi B: 10 Decrease the disk quota value by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 8}}' Example error message The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased To resolve this issue, specify an integer greater than 10 .
[ "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring", "oc create -f cluster-monitoring-config.yaml", "sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "oc debug node/<node_name>", "lsblk", "#!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid \"USD{device}\" &> /dev/null if [ USD? == 2 ]; then echo \"secondary device found USD{device}\" echo \"creating filesystem for etcd mount\" mkfs.xfs -L var-lib-etcd -f \"USD{device}\" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo \"Couldn't find secondary block device!\" >&2 exit 77", "base64 -w0 etcd-find-secondary-device.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' 
ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target", "oc debug node/<node_name>", "grep -w \"/var/lib/etcd\" /proc/mounts", "/dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", 
"sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"", "Control Plane Hardware Speed: <VALUE>", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"controlPlaneHardwareSpeed\": \"<value>\"}}'", "etcd.operator.openshift.io/cluster patched", "The Etcd \"cluster\" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: \"Faster\": supported values: \"\", \"Standard\", \"Slower\"", "oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"", "Control Plane Hardware Speed: \"\"", "oc get pods -n openshift-etcd -w", "installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s", "oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT", "oc describe etcd/cluster | grep \"Backend Quota\"", "Backend Quota Gi B: <value>", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": <value>}}'", "etcd.operator.openshift.io/cluster patched", "oc describe etcd/cluster | grep \"Backend Quota\"", "oc get pods -n openshift-etcd", "NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 
Completed 0 42m", "oc describe -n openshift-etcd pod/<etcd_podname> | grep \"ETCD_QUOTA_BACKEND_BYTES\"", "ETCD_QUOTA_BACKEND_BYTES: 8589934592", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 5}}'", "The Etcd \"cluster\" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 64}}'", "The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32", "oc describe etcd/cluster | grep \"Backend Quota\"", "Backend Quota Gi B: 10", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 8}}'", "The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/recommended-performance-and-scalability-practices-2
2.3. Monitoring Performance in Virtual Machine Manager
2.3. Monitoring Performance in Virtual Machine Manager You can use the Virtual Machine Monitor to view performance information for any virtual machine on your system. You can also configure the performance information displayed in Virtual Machine Manager. 2.3.1. Viewing a Performance Overview in Virtual Machine Manager To view a performance overview for a virtual machine using Virtual Machine Manager: In the Virtual Machine Manager main window, highlight the virtual machine that you want to view. Figure 2.1. Selecting a virtual machine to display From the Virtual Machine Manager Edit menu, select Virtual Machine Details . When the Virtual Machine details window opens, there may be a console displayed. Should this happen, click View and then select Details . The Overview window opens first by default. Select Performance from the navigation pane on the left hand side. The Performance view shows a summary of guest performance, including CPU and Memory usage and Disk and Network input and output. Figure 2.2. Displaying guest performance details 2.3.2. Performance Monitoring Performance monitoring preferences can be modified with virt-manager 's preferences window. To configure performance monitoring: From the Edit menu, select Preferences . The Preferences window appears. From the Polling tab specify the time in seconds or stats polling options. Figure 2.3. Configuring performance monitoring 2.3.3. Displaying CPU Usage for Guests To view the CPU usage for all guests on your system: From the View menu, select Graph , then the Guest CPU Usage check box. The Virtual Machine Manager shows a graph of CPU usage for all virtual machines on your system. Figure 2.4. Guest CPU usage graph 2.3.4. Displaying CPU Usage for Hosts To view the CPU usage for all hosts on your system: From the View menu, select Graph , then the Host CPU Usage check box. The Virtual Machine Manager shows a graph of host CPU usage on your system. Figure 2.5. Host CPU usage graph 2.3.5. Displaying Disk I/O To view the disk I/O for all virtual machines on your system: Make sure that the Disk I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Polling tab. Select the Disk I/O check box. Figure 2.6. Enabling Disk I/O To enable the Disk I/O display, from the View menu, select Graph , then the Disk I/O check box. The Virtual Machine Manager shows a graph of Disk I/O for all virtual machines on your system. Figure 2.7. Displaying Disk I/O 2.3.6. Displaying Network I/O To view the network I/O for all virtual machines on your system: Make sure that the Network I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Polling tab. Select the Network I/O check box. Figure 2.8. Enabling Network I/O To display the Network I/O statistics, from the View menu, select Graph , then the Network I/O check box. The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system. Figure 2.9. Displaying Network I/O 2.3.7. Displaying Memory Usage To view the memory usage for all virtual machines on your system: Make sure that the memory usage statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Polling tab. Select the Poll Memory stats check box. Figure 2.10. Enabling memory usage To display the memory usage, from the View menu, select Graph , then the Memory Usage check box. 
The Virtual Machine Manager lists the percentage of memory in use (in megabytes) for all virtual machines on your system. Figure 2.11. Displaying memory usage
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-monitoring_in_virt_manager
Chapter 2. Initial Load Balancer Add-On Configuration
Chapter 2. Initial Load Balancer Add-On Configuration After installing Red Hat Enterprise Linux, you must take some basic steps to set up the LVS router and the real servers. This chapter covers these initial steps in detail. Note The LVS router node that becomes the active node once Load Balancer Add-On is started is also referred to as the primary node . When configuring Load Balancer Add-On, use the Piranha Configuration Tool on the primary node. 2.1. Configuring Services on the LVS Router The Red Hat Enterprise Linux installation program installs all of the components needed to set up Load Balancer Add-On, but the appropriate services must be activated before configuring Load Balancer Add-On. For the LVS router, set the appropriate services to start at boot time. There are three primary tools available for setting services to activate at boot time under Red Hat Enterprise Linux : the command line program chkconfig , the ncurses-based program ntsysv , and the graphical Services Configuration Tool . All of these tools require root access. Note To attain root access, open a shell prompt and use the su - command followed by the root password. For example: On the LVS router, there are three services which need to be set to activate at boot time: The piranha-gui service (primary node only) The pulse service The sshd service If you are clustering multi-port services or using firewall marks, you must also enable the iptables service. It is best to set these services to activate in both runlevel 3 and runlevel 5. To accomplish this using chkconfig , type the following command for each service: /sbin/chkconfig --level 35 daemon on In the above command, replace daemon with the name of the service you are activating. To get a list of services on the system as well as what runlevel they are set to activate on, issue the following command: /sbin/chkconfig --list Warning Turning any of the above services on using chkconfig does not actually start the daemon. To do this use the /sbin/service command. See Section 2.3, "Starting the Piranha Configuration Tool Service" for an example of how to use the /sbin/service command.
[ "su - Password: root password" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/ch-initial-setup-VSA
Chapter 2. Installing .NET 6.0
Chapter 2. Installing .NET 6.0 To install .NET on RHEL 7 you need to first enable the .NET software repositories and install the scl tool. Prerequisites Installed and registered RHEL 7 with attached subscriptions. For more information, see Registering the System and Attaching Subscriptions . Procedure Enable the .NET software repositories: Replace variant with server , workstation or hpc-node depending on what RHEL system you are running (RHEL 7 Server, RHEL 7 Workstation, or HPC Compute Node, respectively). Verify the list of subscriptions attached to your system: Install the scl tool: Install .NET 6.0 and all of its dependencies: Enable the rh-dotnet60 Software Collection environment: You can now run dotnet commands in this bash shell session. If you log out, use another shell, or open up a new terminal, the dotnet command is no longer enabled. Warning Red Hat does not recommend permanently enabling rh-dotnet60 because it may affect other programs. If you want to enable rh-dotnet permanently, add source scl_source enable rh-dotnet60 to your ~/.bashrc file. Verification steps Verify the installation: The output returns the relevant information about the .NET installation and the environment.
[ "sudo subscription-manager repos --enable=rhel-7- variant -dotnet-rpms", "sudo subscription-manager list --consumed", "sudo yum install scl-utils -y", "sudo yum install rh-dotnet60 -y", "scl enable rh-dotnet60 bash", "dotnet --info" ]
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_7/installing-dotnet_getting-started-with-dotnet-on-rhel-7
4.2. Example - Laptop
4.2. Example - Laptop One other very common place where power management and savings can really make a difference are laptops. As laptops by design normally already use drastically less energy than workstations or servers the potential for absolute savings are less than for other machines. When in battery mode, though, any saving can help to get a few more minutes of battery life out of a laptop. Although this section focuses on laptops in battery mode, but you certainly can still use some or all of those tunings while running on AC power as well. Savings for single components usually make a bigger relative difference on laptops than they do on workstations. For example, a 1 Gbit/s network interface running at 100 Mbits/s saves around 3-4 watts. For a typical server with a total power consumption of around 400 watts, this saving is approximately 1 %. On a laptop with a total power consumption of around 40 watts, the power saving on just this one component amounts to 10 % of the total. Specific power-saving optimizations on a typical laptop include: Configure the system BIOS to disable all hardware that you do not use. For example, parallel or serial ports, card readers, webcams, WiFi, and Bluetooth just to name a few possible candidates. Dim the display in darker environments where you do not need full illumination to read the screen comfortably. Use System + Preferences Power Management on the GNOME desktop, Kickoff Application Launcher + Computer + System Settings + Advanced Power Management on the KDE desktop; or gnome-power-manager or xbacklight at the command line; or the function keys on your laptop. Additionally, (or alternatively) you can perform many small adjustments to various system settings: use the ondemand governor (enabled by default in Red Hat Enterprise Linux 7) enable AC97 audio power-saving (enabled by default in Red Hat Enterprise Linux 7): enable USB auto-suspend: Note that USB auto-suspend does not work correctly with all USB devices. mount file system using relatime (default in Red Hat Enterprise Linux 7): reduce screen brightness to 50 or less, for example: activate DPMS for screen idle: deactivate Wi-Fi:
[ "~]# echo Y > /sys/module/snd_ac97_codec/parameters/power_save", "~]# for i in /sys/bus/usb/devices/*/power/autosuspend; do echo 1 > USDi; done", "~]# mount -o remount,relatime mountpoint", "~]USD xbacklight -set 50", "~]USD xset +dpms; xset dpms 0 0 300", "~]# echo 1 > /sys/bus/pci/devices/*/rf_kill" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/example_laptop
Chapter 2. Fixed Issues
Chapter 2. Fixed Issues 2.1. AMQ JMS CVE-2020-9488 - Improper validation of certificate with host mismatch in Apache Log4j SMTP appender This issue was resolved by updating the client's Log4j dependency. 2.2. AMQ Resource Adapter CVE-2020-14297 , CVE-2020-14307 , CVE-2020-11113 - Vulnerabilities in the example code In earlier releases of the product, AMQ Resource Adapter contained an example program subject to the listed vulnerabilities. In this release, the vulnerabilities are addressed in a new example program.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_2.8_release_notes/fixed_issues
Chapter 13. Validating schemas with the Red Hat build of Apicurio Registry
Chapter 13. Validating schemas with the Red Hat build of Apicurio Registry You can use the Red Hat build of Apicurio Registry with Streams for Apache Kafka. Apicurio Registry is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. You can use Apicurio Registry to decouple the structure of your data from your client applications, and to share and manage your data types and API descriptions at runtime using a REST interface. Apicurio Registry stores schemas used to serialize and deserialize messages, which can then be referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas. Apicurio Registry provides Kafka client serializers/deserializers for Kafka producer and consumer applications. Kafka producer applications use serializers to encode messages that conform to specific event schemas. Kafka consumer applications use deserializers, which validate that the messages have been serialized using the correct schema, based on a specific schema ID. You can enable your applications to use a schema from the registry. This ensures consistent schema usage and helps to prevent data errors at runtime. Additional resources Red Hat build of Apicurio Registry product documentation Red Hat build of Apicurio Registry is built on the Apicurio Registry open source community project available on GitHub: Apicurio/apicurio-registry
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/service-registry-concepts-str
Chapter 9. Operators
Chapter 9. Operators 9.1. Using Operators with MicroShift You can use Operators with MicroShift to create applications that monitor the running services in your cluster. Operators can manage applications and their resources, such as deploying a database or message bus. As customized software running inside your cluster, Operators can be used to implement and automate common operations. Operators offer a more localized configuration experience and integrate with Kubernetes APIs and CLI tools such as kubectl and oc . Operators are designed specifically for your applications and enable you to configure components instead of modifying a global configuration file. MicroShift applications are generally expected to be deployed in static environments. However, Operators are available if they are helpful for your use case. To determine the compatibility of an Operator with MicroShift, check the Operator documentation. 9.1.1. How to use Operators with MicroShift clusters There are two ways to use Operators for your MicroShift clusters: 9.1.1.1. Manifests for Operators Operators can be installed and managed directly by using manifests. You can use the kustomize configuration management tool with MicroShift to deploy an application. Use the same steps to install Operators with manifests. See Using Kustomize manifests to deploy applications and Using manifests example for details. 9.1.1.2. Operator Lifecycle Manager for Operators You can also install add-on Operators to a MicroShift cluster using Operator Lifecycle Manager (OLM). OLM can be used to manage both custom Operators and Operators that are widely available. Building catalogs is required to use OLM with MicroShift. For details, see Using Operator Lifecycle Manager with MicroShift . 9.2. Using Operator Lifecycle Manager with MicroShift The Operator Lifecycle Manager (OLM) package manager is used in MicroShift for installing and running optional add-on Operators . 9.2.1. Considerations for using OLM with MicroShift Cluster Operators as applied in OpenShift Container Platform are not used in MicroShift. You must create your own catalogs for the add-on Operators you want to use with your applications. Catalogs are not provided by default. Each catalog must have an accessible CatalogSource added to a cluster, so that the OLM catalog Operator can use the catalog for content. You must use the CLI to conduct OLM activities with MicroShift. The console and OperatorHub GUIs are not available. Use the Operator Package Manager opm CLI with network-connected clusters, or for building catalogs for custom Operators that use an internal registry. To mirror your catalogs and Operators for disconnected or offline clusters, install the oc-mirror OpenShift CLI plugin . Important Before using an Operator, verify with the provider that the Operator is supported on Red Hat build of MicroShift. 9.2.2. Determining your OLM installation type You can install the OLM package manager for use with MicroShift 4.15 or newer versions. There are different ways to install OLM for MicroShift clusters, depending on your use case. You can install the microshift-olm RPM at the same time you install the MicroShift RPM on Red Hat Enterprise Linux (RHEL). You can install the microshift-olm RPM on an existing MicroShift 4.18 installation. Restart the MicroShift service after installing OLM for the changes to apply. See Installing the Operator Lifecycle Manager (OLM) from an RPM package . You can embed OLM in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. 
See Adding the Operator Lifecycle Manager (OLM) service to a blueprint . 9.2.3. Namespace use in MicroShift The microshift-olm RPM creates three default namespaces: one for running OLM, and two for catalog and Operator installation. You can create additional namespaces as needed for your use case. 9.2.3.1. Default namespaces The following table lists the default namespaces and a brief description of how each namespace works. Table 9.1. Default namespaces created by OLM for MicroShift Default Namespace Details openshift-operator-lifecycle-manager The OLM package manager runs in this namespace. openshift-marketplace The global namespace. Empty by default. To make the catalog source available globally to users in all namespaces, set the openshift-marketplace namespace in the catalog-source YAML. openshift-operators The default namespace where Operators run in MicroShift. Operators that reference catalogs in the openshift-operators namespace must have the AllNamespaces watch scope. 9.2.3.2. Custom namespaces If you want to use a catalog and Operator together in a single namespace, then you must create a custom namespace. After you create the namespace, you must create the catalog in that namespace. All Operators running in the custom namespace must have the same single-namespace watch scope. 9.2.4. About building Operator catalogs To use Operator Lifecycle Manager (OLM) with MicroShift, you must build custom Operator catalogs that you can then manage with OLM. The standard catalogs that are included with OpenShift Container Platform are not included with MicroShift. 9.2.4.1. File-based Operator catalogs You can create catalogs for your custom Operators or filter catalogs of widely available Operators. You can combine both methods to create the catalogs needed for your specific use case. To run MicroShift with your own Operators and OLM, make a catalog by using the file-based catalog structure. For details, see Managing custom catalogs and Example catalog . See also opm CLI reference . Important When adding a catalog source to a cluster , set the securityContextConfig value to restricted in the catalogSource.yaml file. Ensure that your catalog can run with restricted permissions. Additional resources opm CLI reference About Operator catalogs To create file-based catalogs by using the opm CLI, see Managing custom catalogs . 9.2.5. How to deploy Operators using OLM After you create and deploy your custom catalog, you must create a Subscription custom resource (CR) that can access the catalog and install the Operators you choose. Where Operators run depends on the namespace in which you create the Subscription CR. Important Operators in OLM have a watch scope. For example, some Operators only support watching their own namespace, while others support watching every namespace in the cluster. All Operators installed in a given namespace must have the same watch scope. A quick way to inspect which watch scopes an installed Operator supports is sketched at the end of this section. 9.2.5.1. Connectivity and OLM Operator deployment Operators can be deployed anywhere a catalog is running. For clusters that are connected to the internet, mirroring images is not required. Images can be pulled over the network. For restricted networks in which MicroShift has access to an internal network only, images must be mirrored to an internal registry. For use cases in which MicroShift clusters are completely offline, all images must be embedded into an osbuild blueprint. Additional resources Operator group membership 
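The watch scope an Operator supports is recorded in the installModes list of its ClusterServiceVersion (CSV) after installation. The following is a minimal sketch, not part of the documented procedure, assuming the Operator runs in the default openshift-operators namespace; <csv_name> is a placeholder for a CSV name returned by the first command:
oc get csv -n openshift-operators
oc get csv <csv_name> -n openshift-operators -o jsonpath='{.spec.installModes}'
Each entry in the output names an install mode, such as OwnNamespace or AllNamespaces, and indicates whether the Operator supports it.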
9.2.5.2. Adding OLM-based Operators to a networked cluster using the global namespace To deploy different Operators to different namespaces, use this procedure. For MicroShift clusters that have network connectivity, Operator Lifecycle Manager (OLM) can access sources hosted on remote registries. The following procedure lists the basic steps of using configuration files to install an Operator that uses the global namespace. Note To use an Operator installed in a different namespace, or in more than one namespace, make sure that the catalog source and the Subscription CR that references the Operator are running in the openshift-marketplace namespace. Prerequisites The OpenShift CLI ( oc ) is installed. Operator Lifecycle Manager (OLM) is installed. You have created a custom catalog in the global namespace. Procedure Confirm that OLM is running by using the following command: USD oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operator Example output NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 2m24s Confirm that the OLM catalog Operator is running by using the following command: USD oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operator Example output NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 2m33s Note The following steps assume you are using the global namespace, openshift-marketplace . The catalog must run in the same namespace as the Operator. The Operator must support the AllNamespaces mode. Create the CatalogSource object by using the following example YAML: Example catalog source YAML apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: operatorhubio-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: quay.io/operatorhubio/catalog:latest displayName: Community Operators 2 publisher: OperatorHub.io grpcPodConfig: securityContextConfig: restricted 3 updateStrategy: registryPoll: interval: 60m 1 The global namespace. Setting the metadata.namespace to openshift-marketplace enables the catalog to run in all namespaces. Subscriptions in any namespace can reference catalogs created in the openshift-marketplace namespace. 2 Community Operators are not installed by default with OLM for MicroShift. Listed here for example only. 3 The value of securityContextConfig must be set to restricted for MicroShift. Apply the CatalogSource configuration by running the following command: USD oc apply -f <catalog_source.yaml> 1 1 Replace <catalog_source.yaml> with your catalog source configuration file name. In this example, catalogsource.yaml is used. 
Example output catalogsource.operators.coreos.com/operatorhubio-catalog created To verify that the catalog source is applied, check for the READY state by using the following command: USD oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalog Example output Name: operatorhubio-catalog Namespace: openshift-marketplace Labels: <none> Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Metadata: Creation Timestamp: 2024-01-31T09:55:31Z Generation: 1 Resource Version: 1212 UID: 4edc1a96-83cd-4de9-ac8c-c269ca895f3e Spec: Display Name: Community Operators Grpc Pod Config: Security Context Config: restricted Image: quay.io/operatorhubio/catalog:latest Publisher: OperatorHub.io Source Type: grpc Update Strategy: Registry Poll: Interval: 60m Status: Connection State: Address: operatorhubio-catalog.openshift-marketplace.svc:50051 Last Connect: 2024-01-31T09:55:57Z Last Observed State: READY 1 Registry Service: Created At: 2024-01-31T09:55:31Z Port: 50051 Protocol: grpc Service Name: operatorhubio-catalog Service Namespace: openshift-marketplace Events: <none> 1 The status is reported as READY . Confirm that the catalog source is running by using the following command: USD oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalog Example output NAME READY STATUS RESTARTS AGE operatorhubio-catalog-x24nh 1/1 Running 0 59s Create a Subscription CR configuration file by using the following example YAML: Example Subscription custom resource YAML apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-cert-manager namespace: openshift-operators spec: channel: stable name: cert-manager source: operatorhubio-catalog sourceNamespace: openshift-marketplace 1 1 The global namespace. Setting the sourceNamespace value to openshift-marketplace enables Operators to run in multiple namespaces if the catalog also runs in the openshift-marketplace namespace. Apply the Subscription CR configuration by running the following command: USD oc apply -f <subscription_cr.yaml> 1 1 Replace <subscription_cr.yaml> with your Subscription CR filename. Example output subscription.operators.coreos.com/my-cert-manager created You can create a configuration file for the specific Operand you want to use and apply it now. Verification Verify that your Operator is running by using the following command: USD oc get pods -n openshift-operators 1 1 The namespace from the Subscription CR is used. Note Allow a minute or two for the Operator start. Example output NAME READY STATUS RESTARTS AGE cert-manager-7df8994ddb-4vrkr 1/1 Running 0 19s cert-manager-cainjector-5746db8fd7-69442 1/1 Running 0 18s cert-manager-webhook-f858bf58b-748nt 1/1 Running 0 18s 9.2.5.3. Adding OLM-based Operators to a networked cluster in a specific namespace Use this procedure if you want to specify a namespace for an Operator, for example, olm-microshift . In this example, the catalog is scoped and available in the global openshift-marketplace namespace. The Operator uses content from the global namespace, but runs only in the olm-microshift namespace. For MicroShift clusters that have network connectivity, Operator Lifecycle Manager (OLM) can access sources hosted on remote registries. Important All of the Operators installed in a specific namespace must have the same watch scope. In this case, the watch scope is OwnNamespace . Prerequisites The OpenShift CLI ( oc ) is installed. Operator Lifecycle Manager (OLM) is installed. 
You have created a custom catalog that is running in the global namespace. Procedure Confirm that OLM is running by using the following command: USD oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operator Example output NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 16m Confirm that the OLM catalog Operator is running by using the following command: USD oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operator Example output NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 16m Create a namespace by using the following example YAML: Example namespace YAML apiVersion: v1 kind: Namespace metadata: name: olm-microshift Apply the namespace configuration using the following command: USD oc apply -f <ns.yaml> 1 1 Replace <ns.yaml> with the name of your namespace configuration file. In this example, olm-microshift is used. Example output namespace/olm-microshift created Create the Operator group YAML by using the following example YAML: Example Operator group YAML kind: OperatorGroup apiVersion: operators.coreos.com/v1 metadata: name: og namespace: olm-microshift spec: 1 targetNamespaces: - olm-microshift 1 For Operators using the global namespace, omit the spec.targetNamespaces field and values. Apply the Operator group configuration by running the following command: USD oc apply -f <og.yaml> 1 1 Replace <og.yaml> with the name of your operator group configuration file. Example output operatorgroup.operators.coreos.com/og created Create the CatalogSource object by using the following example YAML: Example catalog source YAML apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: operatorhubio-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: quay.io/operatorhubio/catalog:latest displayName: Community Operators 2 publisher: OperatorHub.io grpcPodConfig: securityContextConfig: restricted 3 updateStrategy: registryPoll: interval: 60m 1 The global namespace. Setting the metadata.namespace to openshift-marketplace enables the catalog to run in all namespaces. Subscriptions CRs in any namespace can reference catalogs created in the openshift-marketplace namespace. 2 Community Operators are not installed by default with OLM for MicroShift. Listed here for example only. 3 The value of securityContextConfig must be set to restricted for MicroShift. Apply the CatalogSource configuration by running the following command: USD oc apply -f <catalog_source.yaml> 1 1 Replace <catalog_source.yaml> with your catalog source configuration file name. 
To verify that the catalog source is applied, check for the READY state by using the following command: USD oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalog Example output Name: operatorhubio-catalog Namespace: openshift-marketplace Labels: <none> Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Metadata: Creation Timestamp: 2024-01-31T10:09:46Z Generation: 1 Resource Version: 2811 UID: 60ce4a36-86d3-4921-b9fc-84d67c28df48 Spec: Display Name: Community Operators Grpc Pod Config: Security Context Config: restricted Image: quay.io/operatorhubio/catalog:latest Publisher: OperatorHub.io Source Type: grpc Update Strategy: Registry Poll: Interval: 60m Status: Connection State: Address: operatorhubio-catalog.openshift-marketplace.svc:50051 Last Connect: 2024-01-31T10:10:04Z Last Observed State: READY 1 Registry Service: Created At: 2024-01-31T10:09:46Z Port: 50051 Protocol: grpc Service Name: operatorhubio-catalog Service Namespace: openshift-marketplace Events: <none> 1 The status is reported as READY . Confirm that the catalog source is running by using the following command: USD oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalog Example output NAME READY STATUS RESTARTS AGE operatorhubio-catalog-j7sc8 1/1 Running 0 43s Create a Subscription CR configuration file by using the following example YAML: Example Subscription custom resource YAML apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-gitlab-operator-kubernetes namespace: olm-microshift 1 spec: channel: stable name: gitlab-operator-kubernetes source: operatorhubio-catalog sourceNamespace: openshift-marketplace 2 1 The specific namespace. Operators reference the global namespace for content, but run in the olm-microshift namespace. 2 The global namespace. Subscriptions CRs in any namespace can reference catalogs created in the openshift-marketplace namespace. Apply the Subscription CR configuration by running the following command: USD oc apply -f <subscription_cr.yaml> 1 1 Replace <subscription_cr.yaml> with the name of the Subscription CR configuration file. Example output subscription.operators.coreos.com/my-gitlab-operator-kubernetes You can create a configuration file for the specific Operand you want to use and apply it now. Verification Verify that your Operator is running by using the following command: USD oc get pods -n olm-microshift 1 1 The namespace from the Subscription CR is used. Note Allow a minute or two for the Operator start. Example output NAME READY STATUS RESTARTS AGE gitlab-controller-manager-69bb6df7d6-g7ntx 2/2 Running 0 3m24s Additional resources Updating installed Operators Deleting Operators from a cluster using the CLI 9.3. Creating custom catalogs using the oc-mirror plugin You can create custom catalogs with widely available Operators and mirror them by using the oc-mirror OpenShift CLI (oc) plugin. 9.3.1. Using Red Hat-provided Operator catalogs and mirror registries You can filter and prune catalogs to get specific Operators and mirror them by using the oc-mirror OpenShift CLI (oc) plugin. You can also use Operators in disconnected settings or embedded in Red Hat Enterprise Linux for Edge (RHEL for Edge) images. To read more details about how to configure your systems for mirroring, use the links in the following "Additional resources" section. 
If you are ready to deploy Operators from Red Hat-provided Operator catalogs, mirror them, or to embed them in RHEL for Edge images, start with the following section, "Inspecting catalog contents by using the oc-mirror plugin." Additional resources Using Operator Lifecycle Manager on restricted networks Configuring hosts for mirror registry access Configuring network settings for fully disconnected hosts Getting the mirror registry container image list Embedding in a RHEL for Edge image for offline use 9.3.2. About the oc-mirror plugin for creating a mirror registry You can use the oc-mirror OpenShift CLI (oc) plugin with MicroShift to filter and prune Operator catalogs. You can then mirror the filtered catalog contents to a mirror registry or use the container images in disconnected or offline deployments with RHEL for Edge. Note MicroShift uses the generally available version (1) of the oc-mirror plugin. Do not use the following procedures with the Technical Preview version (2) of oc-mirror plugin. You can mirror the container images required by the desired Operators locally or to a container mirror registry that supports Docker v2-2 , such as Red Hat Quay. The procedure to mirror content from Red Hat-hosted registries connected to the internet to a disconnected image registry is the same, independent of the registry you choose. After you mirror the contents of your catalog, configure each cluster to retrieve this content from your mirror registry. 9.3.2.1. Connectivity considerations when populating a mirror registry When you populate your registry, you can use one of following connectivity scenarios: Connected mirroring If you have a host that can access both the internet and your mirror registry, but not your cluster node, you can directly mirror the content from that machine. Disconnected mirroring If you do not have a host that can access both the internet and your mirror registry, you must mirror the images to a file system and then bring that host or removable media into your disconnected environment. Important A container registry must be reachable by every machine in the clusters that you provision. Installing, updating, and other operations, such as relocating workloads, might fail if the registry is unreachable. To avoid problems caused by an unreachable registry, use the following standard practices: Run mirror registries in a highly available way. Ensure that the mirror registry at least matches the production availability of your clusters. Additional resources Installing the oc mirror plugin 9.3.2.2. Inspecting catalog contents by using the oc-mirror plugin Use the following example procedure to select a catalog and list Operators from available OpenShift Container Platform content to add to your oc-mirror plugin image set configuration file. Note If you use your own catalogs and Operators, you can push the images directly to your internal registry. Prerequisites The OpenShift CLI ( oc ) is installed. Operator Lifecycle Manager (OLM) is installed. The oc-mirror OpenShift CLI (oc) plugin is installed. Procedure Get a list of available Red Hat-provided Operator catalogs to filter by running the following command: USD oc mirror list operators --version 4.18 --catalogs Get a list of Operators in the Red Hat Operators catalog by running the following command: USD oc mirror list operators <--catalog=<catalog_source>> 1 1 Specifies your catalog source, such as registry.redhat.io/redhat/redhat-operator-index:v4.18 or quay.io/operatorhubio/catalog:latest . Select an Operator. 
For this example, amq-broker-rhel8 is selected. Optional: To inspect the channels and versions of the Operator you want to filter, enter the following commands: Get a list of channels by running the following command: USD oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.17 --package=amq-broker-rhel8 Get a list of versions within a channel by running the following command: USD oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.17 --package=amq-broker-rhel8 --channel=7.11.x steps Create and edit an image set configuration file using the information gathered in this procedure. Mirror the images from the transformed image set configuration file to a mirror registry or disk. 9.3.2.3. Creating an image set configuration file You must create an image set configuration file to mirror catalog contents with the oc-mirror plugin. The image set configuration file defines which Operators to mirror along with other configuration settings for the oc-mirror plugin. After generating a default image set file, you must edit the contents so that remaining entries are compatible with both MicroShift and the Operator you plan to use. You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a registry that supports Docker v2-2 . The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have created a container image registry credentials file. See Configuring credentials that allow images to be mirrored . Procedure Use the oc mirror init command to create a template for the image set configuration and save it to a file called imageset-config.yaml : USD oc mirror init --registry <storage_backend> > imageset-config.yaml 1 1 Specifies the location of your storage backend, such as example.com/mirror/oc-mirror-metadata . Example default image set configuration file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: registry.example.com/oc-mirror skipTLS: false mirror: platform: 1 channels: - name: stable-4.18 type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: serverless-operator channels: - name: stable additionalImages: 2 - name: registry.redhat.io/ubi8/ubi:latest helm: {} 3 1 The platform field and related fields are not supported by MicroShift and must be deleted. 2 Specify any additional images to include in the image set. If you do not need to specify additional images, delete this field. 3 Helm is not supported by MicroShift and must be deleted. Edit the values of your image set configuration file to meet the requirements of both MicroShift and the Operator you want to mirror, like the following example: Example edited MicroShift image set configuration file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: 1 registry: imageURL: <storage_backend> 2 skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 3 packages: - name: amq-broker-rhel8 4 channels: - name: 7.11.x 5 1 Set the backend location where the image set metadata is saved. This location can be a registry or local directory. It is required to specify storageConfig values. 
2 Set the registry URL for the storage backend, such as <example.com/mirror/oc-mirror-metadata . 3 Set the Operator catalog to retrieve images from. 4 Specify the Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog. 5 Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name> . Save the updated file. steps Use the oc-mirror plugin to mirror an image set directly to a target mirror registry. Configure CRI-O. Apply the catalog sources to your clusters. 9.3.2.3.1. Image set configuration parameters The oc-mirror plugin requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource. Table 9.2. ImageSetConfiguration parameters Parameter Description Values apiVersion The API version for the ImageSetConfiguration content. String. For example: mirror.openshift.io/v1alpha2 . mirror The configuration of the image set. Object mirror.additionalImages The additional images configuration of the image set. Array of objects. For example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest mirror.additionalImages.name The tag or digest of the image to mirror. String. For example: registry.redhat.io/ubi8/ubi:latest mirror.blockedImages The full tag, digest, or pattern of images to block from mirroring. Array of strings. For example: docker.io/library/alpine mirror.operators The Operators configuration of the image set. Array of objects. For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: elasticsearch-operator minVersion: '2.4.0' mirror.operators.catalog The Operator catalog to include in the image set. String. For example: registry.redhat.io/redhat/redhat-operator-index:v4.18 . mirror.operators.full When true , downloads the full catalog, Operator package, or Operator channel. Boolean. The default value is false . mirror.operators.packages The Operator packages configuration. Array of objects. For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: elasticsearch-operator minVersion: '5.2.3-31' mirror.operators.packages.name The Operator package name to include in the image set String. For example: elasticsearch-operator . mirror.operators.packages.channels The Operator package channel configuration. Object mirror.operators.packages.channels.name The Operator channel name, unique within a package, to include in the image set. String. For example: fast or stable-v4.18 . mirror.operators.packages.channels.maxVersion The highest version of the Operator mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 mirror.operators.packages.channels.minBundle The name of the minimum bundle to include, plus all bundles in the update graph to the channel head. Set this field only if the named bundle has no semantic version metadata. String. For example: bundleName mirror.operators.packages.channels.minVersion The lowest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. 
For example: 5.2.3-31 mirror.operators.packages.maxVersion The highest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 . mirror.operators.packages.minVersion The lowest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 . mirror.operators.skipDependencies If true , dependencies of bundles are not included. Boolean. The default value is false . mirror.operators.targetCatalog An alternative name and optional namespace hierarchy to mirror the referenced catalog as. String. For example: my-namespace/my-operator-catalog mirror.operators.targetName An alternative name to mirror the referenced catalog as. The targetName parameter is deprecated. Use the targetCatalog parameter instead. String. For example: my-operator-catalog mirror.operators.targetTag An alternative tag to append to the targetName or targetCatalog . String. For example: v1 storageConfig The back-end configuration of the image set. Object storageConfig.local The local back-end configuration of the image set. Object storageConfig.local.path The path of the directory to contain the image set metadata. String. For example: ./path/to/dir/ . storageConfig.registry The registry back-end configuration of the image set. Object storageConfig.registry.imageURL The back-end registry URI. Can optionally include a namespace reference in the URI. String. For example: quay.io/myuser/imageset:metadata . storageConfig.registry.skipTLS Optionally skip TLS verification of the referenced back-end registry. Boolean. The default value is false . Note Using the minVersion and maxVersion properties to filter for a specific Operator version range can result in a multiple channel heads error. The error message states that there are multiple channel heads . This is because when the filter is applied, the update graph of the Operator is truncated. Operator Lifecycle Manager requires that every Operator channel contains versions that form an update graph with exactly one end point, that is, the latest version of the Operator. When the filter range is applied, that graph can turn into two or more separate graphs or a graph that has more than one end point. To avoid this error, do not filter out the latest version of an Operator. If you still run into the error, depending on the Operator, either the maxVersion property must be increased or the minVersion property must be decreased. Because every Operator graph can be different, you might need to adjust these values until the error resolves. Additional resources Imageset configuration examples 9.3.2.4. Mirroring from mirror to mirror You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation. You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to get the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. 
You have created the image set configuration file. Procedure Run the oc mirror command to mirror the images from the specified image set configuration to a specified registry: USD oc mirror --config=./<imageset-config.yaml> \ 1 docker://registry.example:5000 2 1 Specify the image set configuration file that you created. For example, imageset-config.yaml . 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. Example output Rendering catalog image "registry.example.com/redhat/redhat-operator-index:v{ocp-version}" with file-based catalog Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. Important The ImageContentSourcePolicy YAML file is used as reference content for manual configuration of CRI-O in MicroShift. You cannot apply the resource directly into a MicroShift cluster. steps Convert the ImageContentSourcePolicy YAML content for use in manually configuring CRI-O. If required, mirror the images from mirror to disk for disconnected or offline use. Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . Additional resources Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment 9.3.2.5. Configuring CRI-O for using a registry mirror for Operators You must transform the imageContentSourcePolicy.yaml file created with the oc-mirror plugin into a format that is compatible with the CRI-O container runtime configuration used by MicroShift. Prerequisites The OpenShift CLI ( oc ) is installed. Operator Lifecycle Manager (OLM) is installed. The oc-mirror OpenShift CLI (oc) plugin is installed. The yq binary is installed. ImageContentSourcePolicy and CatalogSource YAML files are available in the oc-mirror-workspace/results-* directory. Procedure Confirm the contents of the imageContentSourcePolicy.yaml file by running the following command: USD cat oc-mirror-workspace/<results-directory>/imageContentSourcePolicy.yaml 1 1 Specify the results directory name, such as <results-1707148826> . Example output apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: "true" name: operator-0 spec: repositoryDigestMirrors: - mirrors: - registry.<example.com>/amq7 source: registry.redhat.io/amq7 Transform the imageContentSourcePolicy.yaml into a format ready for CRI-O configuration by running the following command: yq '.spec.repositoryDigestMirrors[] as USDitem ireduce([]; . 
+ [{"mirror": USDitem.mirrors[], "source": (USDitem | .source)}]) | .[] | "[[registry]] prefix = \"" + .source + "\" location = \"" + .mirror + "\" mirror-by-digest-only = true insecure = true "' ./icsp.yaml Example output [[registry]] prefix = "registry.redhat.io/amq7" location = "registry.example.com/amq7" mirror-by-digest-only = true insecure = true Add the output to the CRI-O configuration file in the /etc/containers/registries.conf.d/ directory: Example crio-config.yaml mirror configuration file [[registry]] prefix = "registry.redhat.io/amq7" location = "registry.example.com/amq7" mirror-by-digest-only = true insecure = true [[registry]] prefix = "" location = "quay.io" mirror-by-digest-only = true [[registry.mirror]] location = "<registry_host>:<port>" 1 insecure = false 1 Specify the host name and port of your mirror registry server, for example microshift-quay:8443 . Apply the CRI-O configuration changes by restarting MicroShift with the following command: USD sudo systemctl restart crio 9.3.2.6. Installing a custom catalog created with the oc-mirror plugin After you mirror your image set to the mirror registry, you must apply the generated CatalogSource custom resource (CR) into the cluster. The CatalogSource CR is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry. You must then create and apply a subscription CR to subscribe to your custom catalog. Prerequisites You mirrored the image set to your registry mirror. You added image reference information to the CRI-O container runtime configuration. Procedure Apply the catalog source configuration file from the results directory to create the catalog source object by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1708508014/catalogSource-cs-redhat-operator-index.yaml Example catalog source configuration file apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: registry.example.com/redhat/redhat-operator-index:v4.17 updateStrategy: registryPoll: interval: 60m 1 Specifies the global namespace. Setting the metadata.namespace to openshift-marketplace enables the catalog to reference catalogs in all namespaces. Subscriptions in any namespace can reference catalogs created in the openshift-marketplace namespace. Example output catalogsource.operators.coreos.com/cs-redhat-operator-index created Verify that the CatalogSource resources were successfully installed by running the following command: USD oc get catalogsource --all-namespaces Verify that the catalog source is running by using the following command: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE cs-redhat-operator-index-4227b 2/2 Running 0 2m5s Create a Subscription CR, similar to the following example: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-broker namespace: openshift-operators spec: channel: 7.11.x name: amq-broker-rhel8 source: cs-redhat-operator-index sourceNamespace: openshift-marketplace Apply the Subscription CR configuration by running the following command: USD oc apply -f ./<my-subscription-cr.yaml> 1 1 Specify the name of your subscription, such as my-subscription-cr.yaml . Example output subscription.operators.coreos.com/amq-broker created 9.4. 
Adding OLM-based Operators to a disconnected cluster You can use OLM-based Operators in disconnected situations by embedding them in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image. 9.4.1. About adding OLM-based Operators to a disconnected cluster For Operators that are installed on disconnected clusters, Operator Lifecycle Manager (OLM) by default cannot access sources hosted on remote registries because those remote sources require full internet connectivity. Therefore, you must mirror the remote registries to a highly available container registry. The following steps are required to use OLM-based Operators in disconnected situations: Include OLM in the container image list for your mirror registry. Configure the system to use your mirror registry by updating your CRI-O configuration directly. ImageContentSourcePolicy is not supported in MicroShift. Add a CatalogSource object to the cluster so that the OLM catalog Operator can use the local catalog on the mirror registry. Ensure that MicroShift is installed to run in a disconnected capacity. Ensure that the network settings are configured to run in disconnected mode. After enabling OLM in a disconnected cluster, you can continue to use your internet-connected workstation to keep your local catalog sources updated as newer versions of Operators are released. Additional resources Creating the RHEL for Edge image Embedding in a RHEL for Edge image for offline use Configuring network settings for fully disconnected hosts 9.4.1.1. Performing a dry run You can use oc-mirror to perform a dry run, without actually mirroring any images. This allows you to review the list of images that would be mirrored, as well as any images that would be pruned from the mirror registry. A dry run also allows you to catch any errors with your image set configuration early or use the generated list of images with other tools to carry out the mirroring operation. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command with the --dry-run flag to perform a dry run: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 \ 2 --dry-run 3 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the mirror registry. Nothing is mirrored to this registry as long as you use the --dry-run flag. 3 Use the --dry-run flag to generate the dry run artifacts and not an actual image set file. Example output Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index ... info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt Navigate into the workspace directory that was generated: USD cd oc-mirror-workspace/ Review the mapping.txt file that was generated. This file contains a list of all images that would be mirrored. Review the pruning-plan.json file that was generated. 
This file contains a list of all images that would be pruned from the mirror registry when the image set is published. Note The pruning-plan.json file is only generated if your oc-mirror command points to your mirror registry and there are images to be pruned. 9.4.1.2. Getting catalogs and Operator container image references to use with RHEL for Edge in disconnected environments After performing a dry run with the oc-mirror plugin to review the list of images that you want to mirror, you must get all of the container image references, then format the output for adding to an Image Builder blueprint. Note For catalogs made for proprietary Operators, you can format image references for the Image Builder blueprint without using the following procedure. Prerequisites You have a catalog index for the Operators you want to use. You have installed the jq CLI tool. You are familiar with Image Builder blueprint files. You have an Image Builder blueprint TOML file. Procedure Parse the catalog index.json file to get the image references that you need to include in the Image Builder blueprint. You can use either the unfiltered catalog or you can filter out images that cannot be mirrored: Parse the unfiltered catalog index.json file to get the image references by running the following command: jq -r --slurp '.[] | select(.relatedImages != null) | "[[containers]]\nsource = \"" + .relatedImages[].image + "\"\n"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.17/index/index.json If you want to filter out images that cannot be mirrored, filter and parse the catalog index.json file by running the following command: USD jq -r --slurp '.[] | select(.relatedImages != null) | .relatedImages[] | select(.name | contains("ppc") or contains("s390x") | not) | "[[containers]]\\nsource = \\"" + .image + "\\"\\n"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.17/index/index.json Note This step uses the AMQ Broker Operator as an example. You can add other criteria to the jq command for further filtering as required by your use case. Example image-reference output [[containers]] source = "registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:0b2126cfb6054fdf428c1f43b69e36e93a09a49ce15350e9273c98cc08c6598b" [[containers]] source = "registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:0dde839c2dce7cb684094bf26523c8e16677de03149a0fff468b8c3f106e1f4f" ... ... [[containers]] source = "registry.redhat.io/amq7/amq-broker-rhel8@sha256:e8fa2a00e576ecb95561ffbdbf87b1c82d479c8791ab2c6ce741dd0d0b496d15" [[containers]] source = "registry.redhat.io/amq7/amq-broker-rhel8@sha256:ff6fefad518a6c997d4c5a6e475ba89640260167f0bc27715daf3cc30116fad1" ... EOF Important For mirrored and disconnected use cases, ensure that all of the sources filtered from your catalog index.json file are digests. If any of the sources use tags instead of digests, the Operator installation fails. Tags require an internet connection. 
View the imageset-config.yaml to get the catalog image reference for the CatalogSource custom resource (CR) by running the following command: USD cat imageset-config.yaml Example output kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: registry.example.com/microshift-mirror mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 1 packages: - name: amq-broker-rhel8 channels: - name: 7.11.x 1 Use the value in the mirror.catalog catalog image reference for the follwing jq command to get the image digest. In this example, <registry.redhat.io/redhat/redhat-operator-index:v4.17> . Get the SHA of the catalog index image by running the following command: USD skopeo inspect docker://<registry.redhat.io/redhat/redhat-operator-index:v4.17> | jq `.Digest` 1 1 Use the value in the mirror.catalog catalog image reference for the jq command to get the image digest. In this example, <registry.redhat.io/redhat/redhat-operator-index:v4.17> . Example output "sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6" To get ready to add the image references to your Image Builder blueprint file, format the catalog image reference by using the following example: [[containers]] source = "registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6" Add the image references from all the steps to the Image Builder blueprint. Generated Image Builder blueprint example snippet name = "microshift_blueprint" description = "MicroShift 4.17.1 on x86_64 platform" version = "0.0.1" modules = [] groups = [] [[packages]] 1 name = "microshift" version = "4.17.1" ... ... [customizations.services] 2 enabled = ["microshift"] [customizations.firewall] ports = ["22:tcp", "80:tcp", "443:tcp", "5353:udp", "6443:tcp", "30000-32767:tcp", "30000-32767:udp"] ... ... [[containers]] 3 source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4" [[containers]] source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd" ... ... [[containers]] 4 source = "registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6" ... ... [[containers]] source = "registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:0dde839c2dce7cb684094bf26523c8e16677de03149a0fff468b8c3f106e1f4f" ... ... [[containers]] source = "registry.redhat.io/amq7/amq-broker-rhel8@sha256:e8fa2a00e576ecb95561ffbdbf87b1c82d479c8791ab2c6ce741dd0d0b496d15" [[containers]] source = "registry.redhat.io/amq7/amq-broker-rhel8@sha256:ff6fefad518a6c997d4c5a6e475ba89640260167f0bc27715daf3cc30116fad1" ... EOF 1 References for all non-optional MicroShift RPM packages using the same version compatible with the microshift-release-info RPM. 2 References for automatically enabling MicroShift on system startup and applying default networking settings. 3 References for all non-optional MicroShift container images necessary for a disconnected deployment. 4 References for the catalog index. 9.4.1.3. Applying catalogs and Operators in a disconnected-deployment RHEL for Edge image After you have created a RHEL for Edge image for a disconnected environment and configured MicroShift networking settings for disconnected use, you can configure the namespace and create catalog and Operator custom resources (CR) for running your Operators. 
Prerequisites You have a RHEL for Edge image. Networking is configured for disconnected use. You completed the oc-mirror plugin dry run procedure. Procedure Create a CatalogSource custom resource (CR), similar to the following example: Example my-catalog-source-cr.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-redhat-operator-index namespace: openshift-marketplace 1 spec: image: registry.example.com/redhat/redhat-operator-index:v4.17 sourceType: grpc displayName: publisher: updateStrategy: registryPoll: interval: 60m 1 The global namespace. Setting the metadata.namespace to openshift-marketplace enables the catalog to run in all namespaces. Subscriptions in any namespace can reference catalogs created in the openshift-marketplace namespace. Note The default pod security admission definition for openshift-marketplace is baseline , therefore a catalog source custom resource (CR) created in that namespace does not require a spec.grpcPodConfig.securityContextConfig value to be set. You can set a legacy or restricted value if required for the namespace and Operators you want to use. Add the SHA of the catalog index commit to the Catalog Source (CR), similar to the following example: Example namespace spec.image configuration apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-redhat-operator-index namespace: openshift-marketplace spec: image: registry.example.com/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6 1 sourceType: grpc displayName: publisher: updateStrategy: registryPoll: interval: 60m 1 The SHA of the image commit. Use the same SHA you added to the image builder blueprint. Important You must use the SHA instead of a tag in your catalog CR or the pod fails to start. Apply the YAML file from the oc-mirror plugin dry run results directory to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1708508014/catalogSource-cs-redhat-operator-index.yaml Example output catalogsource.operators.coreos.com/cs-redhat-operator-index created Verify that the CatalogSource resources were successfully installed by running the following command: USD oc get catalogsource --all-namespaces Verify that the catalog source is running by using the following command: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE cs-redhat-operator-index-4227b 2/2 Running 0 2m5s Create a Subscription CR, similar to the following example: Example my-subscription-cr.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-broker namespace: openshift-operators spec: channel: 7.11.x name: amq-broker-rhel8 source: cs-redhat-operator-index sourceNamespace: openshift-marketplace Apply the Subscription CR by running the following command: USD oc apply -f ./<my-subscription-cr.yaml> 1 1 Specify the name of your Subscription CR, such as my-subscription-cr.yaml . Example output subscription.operators.coreos.com/amq-broker created
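After the Subscription CR is applied, OLM resolves an install plan and creates a ClusterServiceVersion (CSV) for the Operator. The following is a minimal verification sketch, assuming the amq-broker Subscription shown above and the default openshift-operators namespace; the resource names and versions in your cluster will differ:
oc get subscriptions -n openshift-operators
oc get installplans -n openshift-operators
oc get csv -n openshift-operators
oc get pods -n openshift-operators
The CSV reports Succeeded in its PHASE column when the Operator has finished installing, and the Operator pods then appear in the output of the last command.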
[ "oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operator", "NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 2m24s", "oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operator", "NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 2m33s", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: operatorhubio-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: quay.io/operatorhubio/catalog:latest displayName: Community Operators 2 publisher: OperatorHub.io grpcPodConfig: securityContextConfig: restricted 3 updateStrategy: registryPoll: interval: 60m", "oc apply -f <catalog_source.yaml> 1", "catalogsource.operators.coreos.com/operatorhubio-catalog created", "oc describe catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalog", "Name: operatorhubio-catalog Namespace: openshift-marketplace Labels: <none> Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Metadata: Creation Timestamp: 2024-01-31T09:55:31Z Generation: 1 Resource Version: 1212 UID: 4edc1a96-83cd-4de9-ac8c-c269ca895f3e Spec: Display Name: Community Operators Grpc Pod Config: Security Context Config: restricted Image: quay.io/operatorhubio/catalog:latest Publisher: OperatorHub.io Source Type: grpc Update Strategy: Registry Poll: Interval: 60m Status: Connection State: Address: operatorhubio-catalog.openshift-marketplace.svc:50051 Last Connect: 2024-01-31T09:55:57Z Last Observed State: READY 1 Registry Service: Created At: 2024-01-31T09:55:31Z Port: 50051 Protocol: grpc Service Name: operatorhubio-catalog Service Namespace: openshift-marketplace Events: <none>", "oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalog", "NAME READY STATUS RESTARTS AGE operatorhubio-catalog-x24nh 1/1 Running 0 59s", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-cert-manager namespace: openshift-operators spec: channel: stable name: cert-manager source: operatorhubio-catalog sourceNamespace: openshift-marketplace 1", "oc apply -f <subscription_cr.yaml> 1", "subscription.operators.coreos.com/my-cert-manager created", "oc get pods -n openshift-operators 1", "NAME READY STATUS RESTARTS AGE cert-manager-7df8994ddb-4vrkr 1/1 Running 0 19s cert-manager-cainjector-5746db8fd7-69442 1/1 Running 0 18s cert-manager-webhook-f858bf58b-748nt 1/1 Running 0 18s", "oc -n openshift-operator-lifecycle-manager get pod -l app=olm-operator", "NAME READY STATUS RESTARTS AGE olm-operator-85b5c6786-n6kbc 1/1 Running 0 16m", "oc -n openshift-operator-lifecycle-manager get pod -l app=catalog-operator", "NAME READY STATUS RESTARTS AGE catalog-operator-5fc7f857b6-tj8cf 1/1 Running 0 16m", "apiVersion: v1 kind: Namespace metadata: name: olm-microshift", "oc apply -f <ns.yaml> 1", "namespace/olm-microshift created", "kind: OperatorGroup apiVersion: operators.coreos.com/v1 metadata: name: og namespace: olm-microshift spec: 1 targetNamespaces: - olm-microshift", "oc apply -f <og.yaml> 1", "operatorgroup.operators.coreos.com/og created", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: operatorhubio-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: quay.io/operatorhubio/catalog:latest displayName: Community Operators 2 publisher: OperatorHub.io grpcPodConfig: securityContextConfig: restricted 3 updateStrategy: registryPoll: interval: 60m", "oc apply -f <catalog_source.yaml> 1", "oc describe 
catalogsources.operators.coreos.com -n openshift-marketplace operatorhubio-catalog", "Name: operatorhubio-catalog Namespace: openshift-marketplace Labels: <none> Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Metadata: Creation Timestamp: 2024-01-31T10:09:46Z Generation: 1 Resource Version: 2811 UID: 60ce4a36-86d3-4921-b9fc-84d67c28df48 Spec: Display Name: Community Operators Grpc Pod Config: Security Context Config: restricted Image: quay.io/operatorhubio/catalog:latest Publisher: OperatorHub.io Source Type: grpc Update Strategy: Registry Poll: Interval: 60m Status: Connection State: Address: operatorhubio-catalog.openshift-marketplace.svc:50051 Last Connect: 2024-01-31T10:10:04Z Last Observed State: READY 1 Registry Service: Created At: 2024-01-31T10:09:46Z Port: 50051 Protocol: grpc Service Name: operatorhubio-catalog Service Namespace: openshift-marketplace Events: <none>", "oc get pods -n openshift-marketplace -l olm.catalogSource=operatorhubio-catalog", "NAME READY STATUS RESTARTS AGE operatorhubio-catalog-j7sc8 1/1 Running 0 43s", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-gitlab-operator-kubernetes namespace: olm-microshift 1 spec: channel: stable name: gitlab-operator-kubernetes source: operatorhubio-catalog sourceNamespace: openshift-marketplace 2", "oc apply -f <subscription_cr.yaml> 1", "subscription.operators.coreos.com/my-gitlab-operator-kubernetes", "oc get pods -n olm-microshift 1", "NAME READY STATUS RESTARTS AGE gitlab-controller-manager-69bb6df7d6-g7ntx 2/2 Running 0 3m24s", "oc mirror list operators --version 4.18 --catalogs", "oc mirror list operators <--catalog=<catalog_source>> 1", "oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.17 --package=amq-broker-rhel8", "oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.17 --package=amq-broker-rhel8 --channel=7.11.x", "oc mirror init --registry <storage_backend> > imageset-config.yaml 1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: registry.example.com/oc-mirror skipTLS: false mirror: platform: 1 channels: - name: stable-4.18 type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: serverless-operator channels: - name: stable additionalImages: 2 - name: registry.redhat.io/ubi8/ubi:latest helm: {} 3", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: 1 registry: imageURL: <storage_backend> 2 skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 3 packages: - name: amq-broker-rhel8 4 channels: - name: 7.11.x 5", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "oc mirror --config=./<imageset-config.yaml> \\ 1 docker://registry.example:5000 2", "Rendering catalog image \"registry.example.com/redhat/redhat-operator-index:v{ocp-version}\" with file-based catalog", "cat oc-mirror-workspace/<results-directory>/imageContentSourcePolicy.yaml 1", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: operator-0 spec: 
repositoryDigestMirrors: - mirrors: - registry.<example.com>/amq7 source: registry.redhat.io/amq7", "yq '.spec.repositoryDigestMirrors[] as USDitem ireduce([]; . + [{\"mirror\": USDitem.mirrors[], \"source\": (USDitem | .source)}]) | .[] | \"[[registry]] prefix = \\\"\" + .source + \"\\\" location = \\\"\" + .mirror + \"\\\" mirror-by-digest-only = true insecure = true \"' ./icsp.yaml", "[[registry]] prefix = \"registry.redhat.io/amq7\" location = \"registry.example.com/amq7\" mirror-by-digest-only = true insecure = true", "[[registry]] prefix = \"registry.redhat.io/amq7\" location = \"registry.example.com/amq7\" mirror-by-digest-only = true insecure = true [[registry]] prefix = \"\" location = \"quay.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" 1 insecure = false", "sudo systemctl restart crio", "oc apply -f ./oc-mirror-workspace/results-1708508014/catalogSource-cs-redhat-operator-index.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-catalog namespace: openshift-marketplace 1 spec: sourceType: grpc image: registry.example.com/redhat/redhat-operator-index:v4.17 updateStrategy: registryPoll: interval: 60m", "catalogsource.operators.coreos.com/cs-redhat-operator-index created", "oc get catalogsource --all-namespaces", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE cs-redhat-operator-index-4227b 2/2 Running 0 2m5s", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-broker namespace: openshift-operators spec: channel: 7.11.x name: amq-broker-rhel8 source: cs-redhat-operator-index sourceNamespace: openshift-marketplace", "oc apply -f ./<my-subscription-cr.yaml> 1", "subscription.operators.coreos.com/amq-broker created", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3", "Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt", "cd oc-mirror-workspace/", "jq -r --slurp '.[] | select(.relatedImages != null) | \"[[containers]]\\nsource = \\\"\" + .relatedImages[].image + \"\\\"\\n\"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.17/index/index.json", "jq -r --slurp '.[] | select(.relatedImages != null) | .relatedImages[] | select(.name | contains(\"ppc\") or contains(\"s390x\") | not) | \"[[containers]]\\\\nsource = \\\\\"\" + .image + \"\\\\\"\\\\n\"' ./oc-mirror-workspace/src/catalogs/registry.redhat.io/redhat/redhat-operator-index/v4.17/index/index.json", "[[containers]] source = \"registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:0b2126cfb6054fdf428c1f43b69e36e93a09a49ce15350e9273c98cc08c6598b\" [[containers]] source = \"registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:0dde839c2dce7cb684094bf26523c8e16677de03149a0fff468b8c3f106e1f4f\" [[containers]] source = \"registry.redhat.io/amq7/amq-broker-rhel8@sha256:e8fa2a00e576ecb95561ffbdbf87b1c82d479c8791ab2c6ce741dd0d0b496d15\" [[containers]] source = 
\"registry.redhat.io/amq7/amq-broker-rhel8@sha256:ff6fefad518a6c997d4c5a6e475ba89640260167f0bc27715daf3cc30116fad1\" ... EOF", "cat imageset-config.yaml", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: registry.example.com/microshift-mirror mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 1 packages: - name: amq-broker-rhel8 channels: - name: 7.11.x", "skopeo inspect docker://<registry.redhat.io/redhat/redhat-operator-index:v4.17> | jq `.Digest` 1", "\"sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6\"", "[[containers]] source = \"registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6\"", "name = \"microshift_blueprint\" description = \"MicroShift 4.17.1 on x86_64 platform\" version = \"0.0.1\" modules = [] groups = [] [[packages]] 1 name = \"microshift\" version = \"4.17.1\" [customizations.services] 2 enabled = [\"microshift\"] [customizations.firewall] ports = [\"22:tcp\", \"80:tcp\", \"443:tcp\", \"5353:udp\", \"6443:tcp\", \"30000-32767:tcp\", \"30000-32767:udp\"] [[containers]] 3 source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4\" [[containers]] source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd\" [[containers]] 4 source = \"registry.redhat.io/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6\" [[containers]] source = \"registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:0dde839c2dce7cb684094bf26523c8e16677de03149a0fff468b8c3f106e1f4f\" [[containers]] source = \"registry.redhat.io/amq7/amq-broker-rhel8@sha256:e8fa2a00e576ecb95561ffbdbf87b1c82d479c8791ab2c6ce741dd0d0b496d15\" [[containers]] source = \"registry.redhat.io/amq7/amq-broker-rhel8@sha256:ff6fefad518a6c997d4c5a6e475ba89640260167f0bc27715daf3cc30116fad1\" ... EOF", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-redhat-operator-index namespace: openshift-marketplace 1 spec: image: registry.example.com/redhat/redhat-operator-index:v4.17 sourceType: grpc displayName: publisher: updateStrategy: registryPoll: interval: 60m", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-redhat-operator-index namespace: openshift-marketplace spec: image: registry.example.com/redhat/redhat-operator-index@sha256:7a76c0880a839035eb6e896d54ebd63668bb37b82040692141ba39ab4c539bc6 1 sourceType: grpc displayName: publisher: updateStrategy: registryPoll: interval: 60m", "oc apply -f ./oc-mirror-workspace/results-1708508014/catalogSource-cs-redhat-operator-index.yaml", "catalogsource.operators.coreos.com/cs-redhat-operator-index created", "oc get catalogsource --all-namespaces", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE cs-redhat-operator-index-4227b 2/2 Running 0 2m5s", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-broker namespace: openshift-operators spec: channel: 7.11.x name: amq-broker-rhel8 source: cs-redhat-operator-index sourceNamespace: openshift-marketplace", "oc apply -f ./<my-subscription-cr.yaml> 1", "subscription.operators.coreos.com/amq-broker created" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/operators
probe::vm.pagefault
probe::vm.pagefault Name probe::vm.pagefault - Records that a page fault occurred Synopsis vm.pagefault Values address the address of the faulting memory access; i.e. the address that caused the page fault write_access indicates whether this was a write or read access; 1 indicates a write, while 0 indicates a read name name of the probe point Context The process which triggered the fault
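As a quick, hedged illustration (a minimal sketch, assuming the systemtap package and matching kernel debuginfo are installed), the probe can be exercised with a one-line script that prints the faulting address and access type using the values listed above:

stap -e 'probe vm.pagefault { printf("%s: address=0x%x write_access=%d\n", name, address, write_access) }'

Stop the script with Ctrl+C; each line of output corresponds to one page fault handled while the script was running.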
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-pagefault
Chapter 5. Visualizing power monitoring metrics
Chapter 5. Visualizing power monitoring metrics Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can visualize power monitoring metrics in the OpenShift Container Platform web console by accessing power monitoring dashboards or by exploring Metrics under the Observe tab. 5.1. Power monitoring dashboards overview There are two types of power monitoring dashboards. Both provide different levels of details around power consumption metrics for a single cluster: Power Monitoring / Overview dashboard With this dashboard, you can observe the following information: An aggregated view of CPU architecture and its power source ( rapl-sysfs , rapl-msr , or estimator ) along with total nodes with this configuration Total energy consumption by a cluster in the last 24 hours (measured in kilowatt-hour) The amount of power consumed by the top 10 namespaces in a cluster in the last 24 hours Detailed node information, such as its CPU architecture and component power source These features allow you to effectively monitor the energy consumption of the cluster without needing to investigate each namespace separately. Warning Ensure that the Components Source column does not display estimator as the power source. Figure 5.1. The Detailed Node Information table with rapl-sysfs as the component power source If Kepler is unable to obtain hardware power consumption metrics, the Components Source column displays estimator as the power source, which is not supported in Technology Preview. If that happens, then the values from the nodes are not accurate. Power Monitoring / Namespace dashboard This dashboard allows you to view metrics by namespace and pod. You can observe the following information: The power consumption metrics, such as consumption in DRAM and PKG The energy consumption metrics in the last hour, such as consumption in DRAM and PKG for core and uncore components This feature allows you to investigate key peaks and easily identify the primary root causes of high consumption. 5.2. Accessing power monitoring dashboards as a cluster administrator You can access power monitoring dashboards from the Administrator perspective of the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. You have deployed Kepler in your cluster. You have enabled monitoring for user-defined projects. Procedure In the Administrator perspective of the web console, go to Observe Dashboards . From the Dashboard drop-down list, select the power monitoring dashboard you want to see: Power Monitoring / Overview Power Monitoring / Namespace 5.3. Accessing power monitoring dashboards as a developer You can access power monitoring dashboards from the Developer perspective of the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a developer or as a user. 
You have installed the Power monitoring Operator. You have deployed Kepler in your cluster. You have enabled monitoring for user-defined projects. You have view permissions for the namespace openshift-power-monitoring , the namespace where Kepler is deployed to. Procedure In the Developer perspective of the web console, go to Observe Dashboard . From the Dashboard drop-down list, select the power monitoring dashboard you want to see: Power Monitoring / Overview 5.4. Power monitoring metrics overview The Power monitoring Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console under the Observe Metrics tab. Warning This list of exposed metrics is not definitive. Metrics might be added or removed in future releases. Table 5.1. Power monitoring Operator metrics Metric name Description kepler_container_joules_total The aggregated package or socket energy consumption of CPU, DRAM, and other host components by a container. kepler_container_core_joules_total The total energy consumption across CPU cores used by a container. If the system has access to RAPL_ metrics, this metric reflects the proportional container energy consumption of the RAPL Power Plan 0 (PP0), which is the energy consumed by all CPU cores in the socket. kepler_container_dram_joules_total The total energy consumption of DRAM by a container. kepler_container_uncore_joules_total The cumulative energy consumption by uncore components used by a container. The number of components might vary depending on the system. The uncore metric is processor model-specific and might not be available on some server CPUs. kepler_container_package_joules_total The cumulative energy consumed by the CPU socket used by a container. It includes all core and uncore components. kepler_container_other_joules_total The cumulative energy consumption of host components, excluding CPU and DRAM, used by a container. Generally, this metric is the energy consumption of ACPI hosts. kepler_container_bpf_cpu_time_us_total The total CPU time used by the container that utilizes the BPF tracing. kepler_container_cpu_cycles_total The total CPU cycles used by the container that utilizes hardware counters. CPU cycles is a metric directly related to CPU frequency. On systems where processors run at a fixed frequency, CPU cycles and total CPU time are roughly equivalent. On systems where processors run at varying frequencies, CPU cycles and total CPU time have different values. kepler_container_cpu_instructions_total The total CPU instructions used by the container that utilizes hardware counters. CPU instructions is a metric that accounts how the CPU is used. kepler_container_cache_miss_total The total cache miss that occurs for a container that uses hardware counters. kepler_container_cgroupfs_cpu_usage_us_total The total CPU time used by a container reading from control group statistics. kepler_container_cgroupfs_memory_usage_bytes_total The total memory in bytes used by a container reading from control group statistics. kepler_container_cgroupfs_system_cpu_usage_us_total The total CPU time in kernel space used by the container reading from control group statistics. kepler_container_cgroupfs_user_cpu_usage_us_total The total CPU time in user space used by a container reading from control group statistics. kepler_container_bpf_net_tx_irq_total The total number of packets transmitted to network cards of a container that uses the BPF tracing. 
kepler_container_bpf_net_rx_irq_total The total number of packets received from network cards of a container that uses the BPF tracing. kepler_container_bpf_block_irq_total The total number of block I/O calls of a container that uses the BPF tracing. kepler_node_info The node metadata, such as the node CPU architecture. kepler_node_core_joules_total The total energy consumption across CPU cores used by all containers running on a node and operating system. kepler_node_uncore_joules_total The cumulative energy consumption by uncore components used by all containers running on the node and operating system. The number of components might vary depending on the system. kepler_node_dram_joules_total The total energy consumption of DRAM by all containers running on the node and operating system. kepler_node_package_joules_total The cumulative energy consumed by the CPU socket used by all containers running on the node and operating system. It includes all core and uncore components. kepler_node_other_host_components_joules_total The cumulative energy consumption of host components, excluding CPU and DRAM, used by all containers running on the node and operating system. Generally, this metric is the energy consumption of ACPI hosts. kepler_node_platform_joules_total The total energy consumption of the host. Generally, this metric is the host energy consumption from Redfish BMC or ACPI. kepler_node_energy_stat Multiple metrics from nodes labeled with container resource utilization control group metrics that are used in the model server. kepler_node_accelerator_intel_qat The utilization of the accelerator Intel QAT on a certain node. If the system contains Intel QATs, Kepler can calculate the utilization of the node's QATs through telemetry. 5.5. Additional resources Enabling monitoring for user-defined projects
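As an illustration only (not taken from the source), once these metrics are being scraped you could enter a PromQL expression such as the following in the Metrics view described above to chart an approximate per-namespace power draw; the metric name comes from the table above, while the container_namespace label and the 5m window are assumptions that may need adjusting for your environment:

sum by (container_namespace) (rate(kepler_container_joules_total[5m]))

Because the metric is a cumulative energy counter in joules, taking its rate yields an approximate power value in watts.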
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/power_monitoring/visualizing-power-monitoring-metrics
OpenShift sandboxed containers
OpenShift sandboxed containers OpenShift Container Platform 4.13 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/openshift_sandboxed_containers/index
9.9. Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later)
9.9. Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) When a cluster node shuts down, Pacemaker's default response is to stop all resources running on that node and recover them elsewhere, even if the shutdown is a clean shutdown. As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. 9.9.1. Cluster Properties to Configure Resources to Remain Stopped on Clean Node Shutdown The ability to prevent resources from failing over on a clean node shutdown is implemented by means of the following cluster properties. shutdown-lock When this cluster property is set to the default value of false , the cluster will recover resources that are active on nodes being cleanly shut down. When this property is set to true , resources that are active on the nodes being cleanly shut down are unable to start elsewhere until they start on the node again after it rejoins the cluster. The shutdown-lock property will work for either cluster nodes or remote nodes, but not guest nodes. If shutdown-lock is set to true , you can remove the lock on one cluster resource when a node is down so that the resource can start elsewhere by performing a manual refresh on the node with the following command. Note that once the resources are unlocked, the cluster is free to move the resources elsewhere. You can control the likelihood of this occurring by using stickiness values or location preferences for the resource. Note A manual refresh will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. You can then perform a manual refresh on the remote node. shutdown-lock-limit When this cluster property is set to a time other than the default value of 0, resources will be available for recovery on other nodes if the node does not rejoin within the specified time since the shutdown was initiated. Note, however, that the time interval will not be checked any more often than the value of the cluster-recheck-interval cluster property. Note The shutdown-lock-limit property will work with remote nodes only if you first run the following commands: Run the systemctl stop pacemaker_remote command on the remote node to stop the node. Run the pcs resource disable remote-connection-resource command. After you run these commands, the resources that had been running on the remote node will be available for recovery on other nodes when the amount of time specified as the shutdown-lock-limit has passed. 9.9.2. Setting the shutdown-lock Cluster Property The following example sets the shutdown-lock cluster property to true in an example cluster and shows the effect this has when the node is shut down and started again. This example cluster consists of three nodes: z1.example.com , z2.example.com , and z3.example.com . Set the shutdown-lock property to true and verify its value. In this example the shutdown-lock-limit property maintains its default value of 0. Check the status of the cluster. 
In this example, resources third and fifth are running on z1.example.com . Shut down z1.example.com , which will stop the resources that are running on that node. Running the pcs status command shows that node z1.example.com is offline and that the resources that had been running on z1.example.com are LOCKED while the node is down. Start cluster services again on z1.example.com so that it rejoins the cluster. Locked resources should get started on that node, although once they start they will not necessarily remain on the same node. In this example, resources third and fifth are recovered on node z1.example.com .
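For the remote-node case described in the note above, the unlock sequence can be consolidated as follows; this is a hedged sketch in which example-remote and third are placeholder names for the remote connection resource and the locked resource:

systemctl stop pacemaker_remote
pcs resource disable example-remote
pcs resource refresh third --node example-remote

The first command runs on the remote node itself; the remaining two run from a cluster node, after which the refreshed resource is free to start elsewhere.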
[ "pcs resource refresh resource --node node", "pcs property set shutdown-lock=true pcs property list --all | grep shutdown-lock shutdown-lock: true shutdown-lock-limit: 0", "pcs status Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z2.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com", "pcs cluster stop z1.example.com Stopping Cluster (pacemaker) Stopping Cluster (corosync)", "pcs status Node List: * Online: [ z2.example.com z3.example.com ] * OFFLINE: [ z1.example.com ] Full List of Resources: * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED) * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Stopped z1.example.com (LOCKED)", "pcs cluster start z1.example.com Starting Cluster", "pcs status Node List: * Online: [ z1.example.com z2.example.com z3.example.com ] Full List of Resources: .. * first (ocf::pacemaker:Dummy): Started z3.example.com * second (ocf::pacemaker:Dummy): Started z2.example.com * third (ocf::pacemaker:Dummy): Started z1.example.com * fourth (ocf::pacemaker:Dummy): Started z3.example.com * fifth (ocf::pacemaker:Dummy): Started z1.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-shutdown-lock-HAAR
Chapter 14. DHCP Servers
Chapter 14. DHCP Servers Dynamic Host Configuration Protocol ( DHCP ) is a network protocol that automatically assigns TCP/IP information to client machines. Each DHCP client connects to the centrally located DHCP server, which returns the network configuration (including the IP address, gateway, and DNS servers) of that client. 14.1. Why Use DHCP? DHCP is useful for automatic configuration of client network interfaces. When configuring the client system, you can choose DHCP instead of specifying an IP address, netmask, gateway, or DNS servers. The client retrieves this information from the DHCP server. DHCP is also useful if you want to change the IP addresses of a large number of systems. Instead of reconfiguring all the systems, you can just edit one configuration file on the server for the new set of IP addresses. If the DNS servers for an organization change, the changes happen on the DHCP server, not on the DHCP clients. When you restart the network or reboot the clients, the changes go into effect. If an organization has a functional DHCP server correctly connected to a network, users of laptops and other mobile computers can move these devices from office to office. Note that administrators of DNS and DHCP servers, as well as any provisioning applications, should agree on the host name format used in an organization. See Section 6.1.1, "Recommended Naming Practices" for more information on the format of host names.
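To illustrate the single-file change mentioned above, here is a minimal sketch of a subnet declaration in the server's /etc/dhcp/dhcpd.conf; all addresses are placeholder values, not taken from the source, and changing the option domain-name-servers line is all that is needed to hand a new DNS server to every client at its next lease renewal:

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;          # pool of addresses leased to clients
    option routers 192.168.1.1;                 # default gateway for this subnet
    option domain-name-servers 192.168.1.10;    # DNS server handed to clients
}

After editing the file, restart the dhcpd service so that the new settings take effect for subsequent leases.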
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-dhcp_servers
Chapter 10. Adding metadata to instances
Chapter 10. Adding metadata to instances The Compute (nova) service uses metadata to pass configuration information to instances on launch. The instance can access the metadata by using a config drive or the metadata service. Config drive Config drives are special drives that you can attach to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. Metadata service The Compute service provides the metadata service as a REST API, which can be used to retrieve data specific to an instance. Instances access this service at 169.254.169.254 or at fe80::a9fe:a9fe . 10.1. Types of instance metadata Cloud users, cloud administrators, and the Compute service can pass metadata to instances: Cloud user provided data Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can pass data to instances by using the user data feature, and by passing key-value pairs as required properties when creating or updating an instance. Cloud administrator provided data The RHOSP administrator uses the vendordata feature to pass data to instances. The Compute service provides the vendordata modules StaticJSON and DynamicJSON to allow administrators to pass metadata to instances: StaticJSON : (Default) Use for metadata that is the same for all instances. DynamicJSON : Use for metadata that is different for each instance. This module makes a request to an external REST service to determine what metadata to add to an instance. Vendordata configuration is located in one of the following read-only files on the instance: /openstack/{version}/vendor_data.json /openstack/{version}/vendor_data2.json Compute service provided data The Compute service uses its internal implementation of the metadata service to pass information to the instance, such as the requested hostname for the instance, and the availability zone the instance is in. This happens by default and requires no configuration by the cloud user or administrator. 10.2. Adding a config drive to all instances As an administrator, you can configure the Compute service to always create a config drive for instances, and populate the config drive with metadata that is specific to your deployment. For example, you might use a config drive for the following reasons: To pass a networking configuration when your deployment does not use DHCP to assign IP addresses to instances. You can pass the IP address configuration for the instance through the config drive, which the instance can mount and access before you configure the network settings for the instance. To pass data to an instance that is not known to the user starting the instance, for example, a cryptographic token to be used to register the instance with Active Directory post boot. To create a local cached disk read to manage the load of instance requests, which reduces the impact of instances accessing the metadata servers regularly to check in and build facts. Any instance operating system that is capable of mounting an ISO 9660 or VFAT file system can use the config drive. Procedure Open your Compute environment file. 
To always attach a config drive when launching an instance, set the following parameter to True : Optional: To change the format of the config drive from the default value of iso9660 to vfat , add the config_drive_format parameter to your configuration: Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Verification Create an instance: Log in to the instance. Mount the config drive: If the instance OS uses udev : If the instance OS does not use udev , you need to first identify the block device that corresponds to the config drive: Inspect the files in the mounted config drive directory, mnt/config/openstack/{version}/ , for your metadata. 10.3. Adding dynamic metadata to instances You can configure your deployment to create instance-specific metadata, and make the metadata available to that instance through a JSON file. Tip You can use dynamic metadata on the undercloud to integrate director with a Red Hat Identity Management (IdM) server. An IdM server can be used as a certificate authority and manage the overcloud certificates when SSL/TLS is enabled on the overcloud. For more information, see Implementing TLS-e with Ansible in the Security and Hardening Guide . Procedure Open your Compute environment file. Add DynamicJSON to the vendordata provider module: Specify the REST services to contact to generate the metadata. You can specify as many target REST services as required, for example: The Compute service generates the JSON file, vendordata2.json , to contain the metadata retrieved from the configured target services, and stores it in the config drive directory. Note Do not use the same name for a target service more than once. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud:
[ "parameter_defaults: ComputeExtraConfig: nova::compute::force_config_drive: 'true'", "parameter_defaults: ComputeExtraConfig: nova::compute::force_config_drive: 'true' nova::compute::config_drive_format: vfat", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml \\", "(overcloud)USD openstack server create --flavor m1.tiny --image cirros test-config-drive-instance", "mkdir -p /mnt/config mount /dev/disk/by-label/config-2 /mnt/config", "blkid -t LABEL=\"config-2\" -odevice /dev/vdb mkdir -p /mnt/config mount /dev/vdb /mnt/config", "parameter_defaults: ControllerExtraConfig: nova::vendordata::vendordata_providers: - DynamicJSON", "parameter_defaults: ControllerExtraConfig: nova::vendordata::vendordata_providers: - DynamicJSON nova::vendordata::vendordata_dynamic_targets: \"target1@http://127.0.0.1:125\" nova::vendordata::vendordata_dynamic_targets: \"target2@http://127.0.0.1:126\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_adding-metadata-to-instances_instance-metadata
Chapter 1. Air-gapped environment
Chapter 1. Air-gapped environment An air-gapped environment, also known as an air-gapped network or isolated network, ensures security by physically segregating the system or network. This isolation is established to prevent unauthorized access, data transfer, or communication between the air-gapped system and external sources. You can install the Red Hat Developer Hub in an air-gapped environment to ensure security and meet specific regulatory requirements.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/installing_red_hat_developer_hub_in_an_air-gapped_environment/con-airgapped-environment_title-install-rhdh-air-grapped
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_multiple_openshift_data_foundation_storage_clusters/making-open-source-more-inclusive
6.13. Red Hat Virtualization 4.4 Batch Update 1 (ovirt-4.4.2)
6.13. Red Hat Virtualization 4.4 Batch Update 1 (ovirt-4.4.2) 6.13.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1663135 Previously, virtual machine (VM) imports from sparse storage assumed the target also used sparse storage. However, block storage does not support sparse allocation. The current release fixes this issue: Imports to block storage for COW image file formats preserve sparse allocation types and work as expected. BZ# 1740058 Before this update, when you ran a VM that was previously powered off, the VDSM log contained many uninformative warnings. This update resolves the issue and these warnings no longer appear in the VDSM log. BZ# 1793290 Previously, the partition number was not removed from the disk path, so the disk mapping pointed to an arbitrary partition on the disk, instead of the disk itself. The current release fixes this issue: Disk mapping contains only disk paths. BZ# 1843234 Before this update, when using Firefox 74.0.1 and greater with Autofill enabled, the Administration Portal password was used to autofill the Sysprep Administrator password field in the Initial Run tab of the Run Virtual Machine(s) dialog. Validation of the dialog failed because the password did not match the Verify admin password field, which was not autofilled. This issue has been resolved, and the browser no longer uses Autofill for the Sysprep admin password field. BZ# 1855761 Firefox 68 ESR does not support several standard units in the <svg> tag. (For more information, see 1287054 .) Consequently, before this update, aggregated status card icons appeared larger than intended. This update uses supported units to size icons, and as a result, icons appear correctly in FireFox 68 ESR and later. BZ# 1866956 Before this update, when the Blank template was set with HA enabled, a backup of the RHVM virtual machine saved this setting. This setting prevented deployment of the RHVM virtual machine during the restore operation. Consequently, upgrading to Red Hat Virtualization 4.4 failed. This update disables the HA setting on the RHVM virtual machine during self-hosted engine deployment, and as a result, the upgrade to 4.4 succeeds. BZ# 1867038 Previously, restoring from backup or upgrading from RHV 4.3 to RHV 4.4 failed while restoring SSO configuration requiring the gssapi module. In this release, the mod_auth_gssapi package is included in the RHV Manager appliance, and upgrading or restoring from backup succeeds even when SSO configuration is included. BZ# 1869209 Before this update, adding hosts with newer Intel CPUs to IBRS family clusters could fail, and the spec_ctrl flag was not detected. This update resolves the issue and you can now add hosts with modern Intel CPUs to the IBRS family clusters and the spec_ctrl flag is detected. BZ# 1869307 Previously, vim-enhanced package installation failed on Red Hat Virtualization Host 4.4. In this release, vim-enhanced package installation succeeds. BZ# 1870122 Previously, when upgrading a self-hosted engine from RHV 4.3 to RHV 4.4, Grafana was installed by default during the engine-setup process, and if the remote database option was selected for Data Warehouse setup, the upgrade failed. In this release, Grafana deployment is disabled by default in self-hosted engine installations, and the upgrade process succeeds. 
BZ# 1871235 Before this update, a virtual machine that was set with a High Performance profile using the REST API could not start if it had any USB devices, because the High Performance profile disabled the USB controller. Additionally, hosts in clusters with compatibility level 4.3 did not report the TSC frequency. This update resolves these issues. TSC is no longer present for 4.3 clusters and the VM won't have USB devices when there is no USB controller, allowing VMs to run normally. BZ# 1875851 Firefox 68 ESR does not support several standard units in the <svg> tag. (For more information, see 1287054 .) Consequently, before this update, aggregated status card icons appeared larger than intended. This update uses supported units to size icons, and as a result, icons appear correctly in Firefox 68 ESR and later. 6.13.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1749803 This enhancement enables you to set the same target domain for multiple disks. Previously, when moving or copying multiple disks, you needed to set the target domain for each disk separately. Now, if a common target domain exists, you can set it as the target domain for all disks. If there is no common storage domain, such that not all disks are moved or copied to the same storage domain, set the common target domain as 'Mixed'. BZ# 1819260 The following search filter properties for Storage Domains have been enhanced: - 'size' changed to 'free_size' - 'total_size' added to the search engine options - 'used' changed to 'used_size' For example, you can now use the following in the Storage Domains tab: free_size > 6 GB and total_size < 20 GB 6.13.3. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1674497 Previously, hot-unplugging memory on RHEL 8 guests generated an error because the memory DIMM was in use. This prevented the removal of that memory from that VM. To work around this issue, add movable_node by setting the virtual machine's kernel command-line parameters, as described here . BZ# 1837864 When upgrading from Red Hat Virtualization 4.4 GA (RHV 4.4.1) to RHV 4.4.2, the host enters emergency mode and cannot be restarted. Workaround: see the solution in https://access.redhat.com/solutions/5428651 BZ# 1850378 When you upgrade Red Hat Virtualization from 4.3 to 4.4 with a storage domain that is locally mounted on / (root), the upgrade fails. Specifically, on the host it appears that the upgrade is successful, but the host's status on the Administration Portal is NonOperational . Local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk to prevent possible loss of data during upgrades. If you are using / (root) as the locally mounted storage domain, migrate your data to a separate logical volume or disk prior to upgrading.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_batch_update_1_ovirt_4_4_2
Chapter 8. Postinstallation network configuration
Chapter 8. Postinstallation network configuration After installing OpenShift Container Platform, you can further expand and customize your network to your requirements. 8.1. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. Note After cluster installation, you cannot modify the fields listed in the section. 8.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. 
If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 8.3. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone. Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 8.4. Configuring ingress cluster traffic OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller. Otherwise, use a load balancer, an external IP, or a node port. 
Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header. Automatically assign an external IP by using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 8.5. Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses of a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 8.5.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 8.5.1.1. Expanding the node port range You can expand the node port range for the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Tip You can alternatively apply the following YAML to update the node port range: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: "30000-<port>" Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 8.6. Configuring IPsec encryption With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. IPsec is disabled by default. 8.6.1. Prerequisites Your cluster must use the OVN-Kubernetes cluster network provider. 8.6.1.1. Enabling IPsec encryption As a cluster administrator, you can enable IPsec encryption after cluster installation. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header. Procedure To enable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}' 8.6.1.2. Verifying that IPsec is enabled As a cluster administrator, you can verify that IPsec is enabled. 
Verification To find the names of the OVN-Kubernetes control plane pods, enter the following command: USD oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master Example output ovnkube-master-4496s 1/1 Running 0 6h39m ovnkube-master-d6cht 1/1 Running 0 6h42m ovnkube-master-skblc 1/1 Running 0 6h51m ovnkube-master-vf8rf 1/1 Running 0 6h51m ovnkube-master-w7hjr 1/1 Running 0 6h51m ovnkube-master-zsk7x 1/1 Running 0 6h42m Verify that IPsec is enabled on your cluster: USD oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<XXXXX> \ ovn-nbctl --no-leader-only get nb_global . ipsec where: <XXXXX> Specifies the random sequence of letters for a pod from the step. Example output true 8.7. Configuring network policy As a cluster administrator or project administrator, you can configure network policies for a project. 8.7.1. About network policy In a cluster using a Kubernetes Container Network Interface (CNI) plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.11, OpenShift SDN supports using network policy in its default network isolation mode. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. 
apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 8.7.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 policy-group.network.openshift.io/ingress:"" label supports both OpenShift-SDN and OVN-Kubernetes. 8.7.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 8.7.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. 
The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 8.7.3. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 8.7.4. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. 
A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 8.7.5. Creating default network policies for a new project As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project. 8.7.6. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... 
spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 8.7.6.1. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 8.8. Supported configurations The following configurations are supported for the current release of Red Hat OpenShift Service Mesh. 8.8.1. Supported platforms The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Version 2.4 Service Mesh control planes are supported on the following platform versions: Red Hat OpenShift Container Platform version 4.10 or later. Red Hat OpenShift Dedicated version 4. Azure Red Hat OpenShift (ARO) version 4. Red Hat OpenShift Service on AWS (ROSA). 8.8.2. Unsupported configurations Explicitly unsupported cases include: OpenShift Online is not supported for Red Hat OpenShift Service Mesh. Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running. 8.8.3. Supported network configurations Red Hat OpenShift Service Mesh supports the following network configurations. 
OpenShift-SDN OVN-Kubernetes is available on all supported versions of OpenShift Container Platform. Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information. 8.8.4. Supported configurations for Service Mesh This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z, and IBM Power. IBM Z is only supported on OpenShift Container Platform 4.10 and later. IBM Power is only supported on OpenShift Container Platform 4.10 and later. Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster. Configurations that do not integrate external services such as virtual machines. Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. 8.8.5. Supported configurations for Kiali The Kiali console is only supported on the two most recent releases of the Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers. The openshift authentication strategy is the only supported authentication configuration when Kiali is deployed with Red Hat OpenShift Service Mesh (OSSM). The openshift strategy controls access based on the individual's role-based access control (RBAC) roles of the OpenShift Container Platform. 8.8.6. Supported configurations for Distributed Tracing Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated. 8.8.7. Supported WebAssembly module 3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules. 8.8.8. Operator overview Red Hat OpenShift Service Mesh requires the following four Operators: OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project. Red Hat OpenShift distributed tracing platform (Jaeger) - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. Kiali Operator provided by Red Hat - Provides observability for your service mesh. You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 8.9. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 8.9.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. 
In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network/SDN solution, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. In HTTP keep-alive mode scenarios: Encryption LoadBalancerService HostNetwork none 21515 29622 edge 16743 22913 passthrough 36786 53295 re-encrypt 21583 25198 In HTTP close (no keep-alive) scenarios: Encryption LoadBalancerService HostNetwork none 5719 8273 edge 2729 4069 passthrough 4121 5344 re-encrypt 2320 2941 The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4 . Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router: Number of applications Application type 5-10 static file/web server or caching proxy 100-1000 applications generating dynamic content In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. 8.10. Postinstallation RHOSP network configuration You can configure some aspects of an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation. 8.10.1. Configuring application access with floating IP addresses After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic. Note You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set. Prerequisites OpenShift Container Platform cluster must be installed Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation. Procedure After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port: Show the port: USD openstack port show <cluster_name>-<cluster_ID>-ingress-port Attach the port to the IP address: USD openstack floating ip set --port <ingress_port_ID> <apps_FIP> Add a wildcard A record for *apps. 
to your DNS file: *.apps.<cluster_name>.<base_domain> IN A <apps_FIP> Note If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts : <apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain> 8.10.2. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 8.10.3. Adjusting Kuryr ports pool settings in active deployments on RHOSP You can use a custom resource (CR) to configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation on a deployed cluster. Procedure From a command line, open the Cluster Network Operator (CNO) CR for editing: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 1 Set enablePortPoolsPrepopulation to true to make Kuryr create Neutron ports when the first pod that is configured to use the dedicated network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . 
The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting the value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . Save your changes and quit the text editor to commit your changes. Important Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed. 8.10.4. Enabling OVS hardware offloading For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading. OVS is a multi-layer virtual switch that enables large-scale, multi-server network virtualization. Prerequisites You installed a cluster on RHOSP that is configured for single-root input/output virtualization (SR-IOV). You installed the SR-IOV Network Operator on your cluster. You created two hw-offload type virtual function (VF) interfaces on your cluster. Note Application layer gateway flows are broken in OpenShift Container Platform version 4.10, 4.11, and 4.12. Also, you cannot offload the application layer gateway flow for OpenShift Container Platform version 4.13. Procedure Create an SriovNetworkNodePolicy policy for the two hw-offload type VF interfaces that are on your cluster: The first virtual function interface apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: "hwoffload9" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: "hwoffload9" 1 Insert the SriovNetworkNodePolicy value here. 2 Both interfaces must include physical function (PF) names. The second virtual function interface apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: "hwoffload10" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: "hwoffload10" 1 Insert the SriovNetworkNodePolicy value here. 2 Both interfaces must include physical function (PF) names. Create NetworkAttachmentDefinition resources for the two interfaces: A NetworkAttachmentDefinition resource for the first interface apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6" }' A NetworkAttachmentDefinition resource for the second interface apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5" }' Use the interfaces that you created with a pod. 
For example: A pod that uses the two OVS offload interfaces apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest 8.10.5. Attaching an OVS hardware offloading network You can attach an Open vSwitch (OVS) hardware offloading network to your cluster. Prerequisites Your cluster is installed and running. You provisioned an OVS hardware offloading network on Red Hat OpenStack Platform (RHOSP) to use with your cluster. Procedure Create a file named network.yaml from the following template: spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}' 1 type: Raw where: pciBusId Specifies the device that is connected to the offloading network. If you do not have it, you can find this value by running the following command: USD oc describe SriovNetworkNodeState -n openshift-sriov-network-operator From a command line, enter the following command to patch your cluster with the file: USD oc apply -f network.yaml
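After the patch is applied, you can check that the Cluster Network Operator created the corresponding network attachment. The following command is a sketch that assumes the example values above, where the additional network hwoffload1 is defined in the cnf namespace:

# List the network attachment definitions created for the additional network
USD oc get network-attachment-definitions -n cnf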
[ "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]", "oc patch networks.operator.openshift.io cluster --type=merge -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"ipsecConfig\":{ }}}}}'", "oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master", "ovnkube-master-4496s 1/1 Running 0 6h39m ovnkube-master-d6cht 1/1 Running 0 6h42m ovnkube-master-skblc 1/1 Running 0 6h51m ovnkube-master-vf8rf 1/1 Running 0 6h51m ovnkube-master-w7hjr 1/1 Running 0 6h51m ovnkube-master-zsk7x 1/1 Running 0 6h42m", "oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<XXXXX> ovn-nbctl --no-leader-only get nb_global . 
ipsec", "true", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/default-deny created", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: 
ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "openstack port show <cluster_name>-<cluster_ID>-ingress-port", "openstack floating ip set --port <ingress_port_ID> <apps_FIP>", "*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>", "<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", 
\"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'", "apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest", "spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hwoffload1\", \"type\": \"host-device\",\"pciBusId\": \"0000:00:05.0\", \"ipam\": {}}' 1 type: Raw", "oc describe SriovNetworkNodeState -n openshift-sriov-network-operator", "oc apply -f network.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/post-installation_configuration/post-install-network-configuration
10.5.36. LogLevel
10.5.36. LogLevel LogLevel sets how verbose the messages recorded in the error logs are. LogLevel can be set (from least verbose to most verbose) to emerg , alert , crit , error , warn , notice , info , or debug . The default LogLevel is warn .
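For example, to record only error conditions and anything more severe, you could set the directive as follows in httpd.conf (a minimal sketch; choose the level that suits your environment):

# Log error conditions and more severe events; switch to debug only while troubleshooting
LogLevel error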
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-loglevel
Chapter 9. Directory Server RFC support
Chapter 9. Directory Server RFC support Find the list of notable supported LDAP-related RFCs. Note that it is not a complete list of RFCs that Directory Server supports. 9.1. LDAPv3 Features Technical Specification Road Map ( RFC 4510 ) This is a tracking document and does not contain requirements. The Protocol ( RFC 4511 ) Supported with the following exceptions: RFC 4511 Section 4.4.1. Notice of Disconnection : Directory Server terminates the connections in this case. RFC 4511 Section 4.5.1.3. SearchRequest.derefAliases : LDAP aliases are not supported. RFC 4511 Section 4.13. IntermediateResponse Message Directory Information Models ( RFC 4512 ) Supported with the following exceptions: RFC 4512 Section 2.4.2. Structural Object Classes : Directory Server supports entries with multiple structural object classes. RFC 4512 Section 2.6. Alias Entries RFC 4512 Section 4.1.2. Attribute Types : The attribute type COLLECTIVE is not supported. RFC 4512 Section 4.1.4. Matching Rule Uses RFC 4512 Section 4.1.6. DIT Content Rules RFC 4512 Section 4.1.7. DIT Structure Rules and Name Forms RFC 4512 Section 5.1.1. altServer Note that RFC 4512 enables LDAP servers to not support the previously listed exceptions. For further details, see RFC 4512 Section 7.1. Server Guidelines . Authentication Methods and Security Mechanisms ( RFC 4513 ) Supported. String Representation of Distinguished Names ( RFC 4514 ) Supported. String Representation of Search Filters ( RFC 4515 ) Supported. Uniform Resource Locator ( RFC 4516 ) Supported. However, this RFC is mainly focused on LDAP clients. Syntaxes and Matching Rules ( RFC 4517 ) Supported. Exceptions: directoryStringFirstComponentMatch integerFirstComponentMatch objectIdentifierFirstComponentMatch objectIdentifierFirstComponentMatch keywordMatch wordMatch Internationalized String Preparation ( RFC 4518 ) Supported. Schema for User Applications ( RFC 4519 ) Supported. entryUUID Operational Attribute ( RFC 4530 ) Supported. Content Synchronization Operation ( RFC 4533 ) Supported. 9.2. Authentication methods Anonymous SASL Mechanism ( RFC 4505 ) Not supported. Note that RFC 4512 does not require the ANONYMOUS SASL mechanism. However, Directory Server supports LDAP anonymous binds. External SASL Mechanism ( RFC 4422 ) Supported. Plain SASL Mechanism ( RFC 4616 ) Not supported. Note that RFC 4512 does not require the PLAIN SASL mechanism. However, Directory Server supports LDAP anonymous binds. SecurID SASL Mechanism ( RFC 2808 ) Not supported. However if a Cyrus SASL plug-in exists, Directory Server can use it. Kerberos V5 (GSSAPI) SASL Mechanism ( RFC 4752 ) Supported. CRAM-MD5 SASL Mechanism ( RFC 2195 ) Supported. Digest-MD5 SASL Mechanism ( RFC 2831 ) Supported. One-time Password SASL Mechanism ( RFC 2444 ) Not supported. However if a Cyrus SASL plug-in exists, Directory Server can use it. 9.3. X.509 Certificates schema and attributes support LDAP Schema Definitions for X.509 Certificates ( RFC 4523 ) Attribute types and object classes: Supported. Syntaxes: Not supported. Directory Server uses binary and octet syntax. Matching rules: Not supported.
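To see which SASL mechanisms and LDAP controls a particular Directory Server instance actually advertises, you can query the root DSE. The following ldapsearch invocation is a sketch; the host name is a placeholder:

# Read the supported SASL mechanisms and controls from the root DSE
ldapsearch -x -H ldap://server.example.com -b "" -s base supportedSASLMechanisms supportedControl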
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/planning_and_designing_directory_server/assembly_rfc-support_designing-rhds
Hardware accelerators
Hardware accelerators OpenShift Container Platform 4.18 Hardware accelerators Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/hardware_accelerators/index
9.5. Pacemaker Support for Docker Containers (Technology Preview)
9.5. Pacemaker Support for Docker Containers (Technology Preview) Important Pacemaker support for Docker containers is provided for technology preview only. For details on what "technology preview" means, see Technology Preview Features Support Scope . There is one exception to this feature being Technology Preview: As of Red Hat Enterprise Linux 7.4, Red Hat fully supports the usage of Pacemaker bundles for Red Hat Openstack Platform (RHOSP) deployments. Pacemaker supports a special syntax for launching a Docker container with any infrastructure it requires: the bundle . After you have created a Pacemaker bundle, you can create a Pacemaker resource that the bundle encapsulates. Section 9.5.1, "Configuring a Pacemaker Bundle Resource" describes the syntax for the command to create a Pacemaker bundle and provides tables summarizing the parameters you can define for each bundle parameter. Section 9.5.2, "Configuring a Pacemaker Resource in a Bundle" provides information on configuring a resource contained in a Pacemaker bundle. Section 9.5.3, "Limitations of Pacemaker Bundles" notes the limitations of Pacemaker bundles. Section 9.5.4, "Pacemaker Bundle Configuration Example" provides a Pacemaker bundle configuration example. 9.5.1. Configuring a Pacemaker Bundle Resource The syntax for the command to create a Pacemaker bundle for a Docker container is as follows. This command creates a bundle that encapsulates no other resources. For information on creating a cluster resource in a bundle see Section 9.5.2, "Configuring a Pacemaker Resource in a Bundle" . The required bundle_id parameter must be a unique name for the bundle. If the --disabled option is specified, the bundle is not started automatically. If the --wait option is specified, Pacemaker will wait up to n seconds for the bundle to start and then return 0 on success or 1 on error. If n is not specified it defaults to 60 minutes. The following sections describe the parameters you can configure for each element of a Pacemaker bundle. 9.5.1.1. Docker Parameters Table 9.6, "Docker Container Parameters" describes the docker container options you can set for a bundle. Note Before configuring a docker bundle in Pacemaker, you must install Docker and supply a fully configured Docker image on every node allowed to run the bundle. Table 9.6. Docker Container Parameters Field Default Description image Docker image tag (required) replicas Value of promoted-max if that is positive, otherwise 1. A positive integer specifying the number of container instances to launch replicas-per-host 1 A positive integer specifying the number of container instances allowed to run on a single node promoted-max 0 A non-negative integer that, if positive, indicates that the containerized service should be treated as a multistate service, with this many replicas allowed to run the service in the master role network If specified, this will be passed to the docker run command as the network setting for the Docker container. run-command /usr/sbin/pacemaker_remoted if the bundle contains a resource, otherwise none This command will be run inside the container when launching it ("PID 1"). If the bundle contains a resource, this command must start the pacemaker_remoted daemon (but it could, for example, be a script that performs others tasks as well). options Extra command-line options to pass to the docker run command 9.5.1.2. Bundle Network Parameters Table 9.7, "Bundle Resource Network Parameters" describes the network options you can set for a bundle. Table 9.7. 
Bundle Resource Network Parameters Field Default Description add-host TRUE If TRUE, and ip-range-start is used, Pacemaker will automatically ensure that the /etc/hosts file inside the containers has entries for each replica name and its assigned IP. ip-range-start If specified, Pacemaker will create an implicit ocf:heartbeat:IPaddr2 resource for each container instance, starting with this IP address, using as many sequential addresses as were specified as the replicas parameter for the Docker element. These addresses can be used from the host's network to reach the service inside the container, although it is not visible within the container itself. Only IPv4 addresses are currently supported. host-netmask 32 If ip-range-start is specified, the IP addresses are created with this CIDR netmask (as a number of bits). host-interface If ip-range-start is specified, the IP addresses are created on this host interface (by default, it will be determined from the IP address). control-port 3121 If the bundle contains a Pacemaker resource, the cluster will use this integer TCP port for communication with Pacemaker Remote inside the container. Changing this is useful when the container is unable to listen on the default port, which could happen when the container uses the host's network rather than ip-range-start (in which case replicas-per-host must be 1), or when the bundle may run on a Pacemaker Remote node that is already listening on the default port. Any PCMK_remote_port environment variable set on the host or in the container is ignored for bundle connections. When a Pacemaker bundle configuration uses the control-port parameter, then if the bundle has its own IP address the port needs to be open on that IP address on and from all full cluster nodes running corosync. If, instead, the bundle has set the network="host" container parameter, the port needs to be open on each cluster node's IP address from all cluster nodes. Note Replicas are named by the bundle ID plus a dash and an integer counter starting with zero. For example, if a bundle named httpd-bundle has configured replicas=2 , its containers will be named httpd-bundle-0 and httpd-bundle-1 . In addition to the network parameters, you can optionally specify port-map parameters for a bundle. Table 9.8, "Bundle Resource port-map Parameters" describes these port-map parameters. Table 9.8. Bundle Resource port-map Parameters Field Default Description id A unique name for the port mapping (required) port If this is specified, connections to this TCP port number on the host network (on the container's assigned IP address, if ip-range-start is specified) will be forwarded to the container network. Exactly one of port or range must be specified in a port-mapping. internal-port Value of port If port and internal-port are specified, connections to port on the host's network will be forwarded to this port on the container network. range If range is specified, connections to these TCP port numbers (expressed as first_port-last_port ) on the host network (on the container's assigned IP address, if ip-range-start is specified) will be forwarded to the same ports in the container network. Exactly one of port or range must be specified in a port mapping. Note If the bundle contains a resource, Pacemaker will automatically map the control-port , so it is not necessary to specify that port in a port mapping. 9.5.1.3. Bundle Storage Parameters You can optionally configure storage-map parameters for a bundle. 
Table 9.9, "Bundle Resource Storage Mapping Parameters" describes these parameters. Table 9.9. Bundle Resource Storage Mapping Parameters Field Default Description id A unique name for the storage mapping (required) source-dir The absolute path on the host's filesystem that will be mapped into the container. Exactly one of source-dir and source-dir-root parameter must be specified when configuring a storage-map parameter. source-dir-root The start of a path on the host's filesystem that will be mapped into the container, using a different subdirectory on the host for each container instance. The subdirectory will be named with the same name as the bundle name, plus a dash and an integer counter starting with 0. Exactly one source-dir and source-dir-root parameter must be specified when configuring a storage-map parameter. target-dir The path name within the container where the host storage will be mapped (required) options File system mount options to use when mapping the storage As an example of how subdirectories on a host are named using the source-dir-root parameter, if source-dir-root=/path/to/my/directory , target-dir=/srv/appdata , and the bundle is named mybundle with replicas=2 , then the cluster will create two container instances with host names mybundle-0 and mybundle-1 and create two directories on the host running the containers: /path/to/my/directory/mybundle-0 and /path/to/my/directory/mybundle-1 . Each container will be given one of those directories, and any application running inside the container will see the directory as /srv/appdata . Note Pacemaker does not define the behavior if the source directory does not already exist on the host. However, it is expected that the container technology or its resource agent will create the source directory in that case. Note If the bundle contains a Pacemaker resource, Pacemaker will automatically map the equivalent of source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey and source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log into the container, so it is not necessary to specify those paths in when configuring storage-map parameters. Important The PCMK_authkey_location environment variable must not be set to anything other than the default of /etc/pacemaker/authkey on any node in the cluster. 9.5.2. Configuring a Pacemaker Resource in a Bundle A bundle may optionally contain one Pacemaker cluster resource. As with a resource that is not contained in a bundle, the cluster resource may have operations, instance attributes, and metadata attributes defined. If a bundle contains a resource, the container image must include the Pacemaker Remote daemon, and ip-range-start or control-port must be configured in the bundle. Pacemaker will create an implicit ocf:pacemaker:remote resource for the connection, launch Pacemaker Remote within the container, and monitor and manage the resource by means of Pacemaker Remote. If the bundle has more than one container instance (replica), the Pacemaker resource will function as an implicit clone, which will be a multistate clone if the bundle has configured the promoted-max option as greater than zero. You create a resource in a Pacemaker bundle with the pcs resource create command by specifying the bundle parameter for the command and the bundle ID in which to include the resource. For an example of creating a Pacemaker bundle that contains a resource, see Section 9.5.4, "Pacemaker Bundle Configuration Example" . 
Important Containers in bundles that contain a resource must have an accessible networking environment, so that Pacemaker on the cluster nodes can contact Pacemaker Remote inside the container. For example, the docker option --net=none should not be used with a resource. The default (using a distinct network space inside the container) works in combination with the ip-range-start parameter. If the docker option --net=host is used (making the container share the host's network space), a unique control-port parameter should be specified for each bundle. Any firewall must allow access to the control-port . 9.5.2.1. Node Attributes and Bundle Resources If the bundle contains a cluster resource, the resource agent may want to set node attributes such as master scores. However, with containers, it is not apparent which node should get the attribute. If the container uses shared storage that is the same no matter which node the container is hosted on, then it is appropriate to use the master score on the bundle node itself. On the other hand, if the container uses storage exported from the underlying host, then it may be more appropriate to use the master score on the underlying host. Since this depends on the particular situation, the container-attribute-target resource metadata attribute allows the user to specify which approach to use. If it is set to host , then user-defined node attributes will be checked on the underlying host. If it is anything else, the local node (in this case the bundle node) is used. This behavior applies only to user-defined attributes; the cluster will always check the local node for cluster-defined attributes such as #uname . If container-attribute-target is set to host , the cluster will pass additional environment variables to the resource agent that allow it to set node attributes appropriately. 9.5.2.2. Metadata Attributes and Bundle Resources Any metadata attribute set on a bundle will be inherited by the resource contained in a bundle and any resources implicitly created by Pacemaker for the bundle. This includes options such as priority , target-role , and is-managed . 9.5.3. Limitations of Pacemaker Bundles Pacemaker bundles operate with the following limitations: Bundles may not be included in groups or explicitly cloned with a pcs command. This includes a resource that the bundle contains, and any resources implicitly created by Pacemaker for the bundle. Note, however, that if a bundle is configured with a value of replicas greater than one, the bundle behaves as if it were a clone. Restarting Pacemaker while a bundle is unmanaged or the cluster is in maintenance mode may cause the bundle to fail. Bundles do not have instance attributes, utilization attributes, or operations, although a resource contained in a bundle may have them. A bundle that contains a resource can run on a Pacemaker Remote node only if the bundle uses a distinct control-port . 9.5.4. Pacemaker Bundle Configuration Example The following example creates a Pacemaker bundle resource with a bundle ID of httpd-bundle that contains an ocf:heartbeat:apache resource with a resource ID of httpd . This procedure requires the following prerequisite configuration: Docker has been installed and enabled on every node in the cluster. There is an existing Docker image, named pcmktest:http The container image includes the Pacemaker Remote daemon. The container image includes a configured Apache web server. 
Every node in the cluster has directories /var/local/containers/httpd-bundle-0 , /var/local/containers/httpd-bundle-1 , and /var/local/containers/httpd-bundle-2 , containing an index.html file for the web server root. In production, a single, shared document root would be more likely, but for the example this configuration allows you to make the index.html file on each host different so that you can connect to the web server and verify which index.html file is being served. This procedure configures the following parameters for the Pacemaker bundle: The bundle ID is httpd-bundle . The previously-configured Docker container image is pcmktest:http . This example will launch three container instances. This example will pass the command-line option --log-driver=journald to the docker run command. This parameter is not required, but is included to show how to pass an extra option to the docker command. A value of --log-driver=journald means that the system logs inside the container will be logged in the underlying host's systemd journal. Pacemaker will create three sequential implicit ocf:heartbeat:IPaddr2 resources, one for each container instance, starting with the IP address 192.168.122.131. The IP addresses are created on the host interface eth0. The IP addresses are created with a CIDR netmask of 24. This example creates a port map ID of httpd-port ; connections to port 80 on the container's assigned IP address will be forwarded to the container network. This example creates a storage map ID of httpd-root . For this storage mapping: The value of source-dir-root is /var/local/containers , which specifies the start of the path on the host's file system that will be mapped into the container, using a different subdirectory on the host for each container instance. The value of target-dir is /var/www/html , which specifies the path name within the container where the host storage will be mapped. The file system rw mount option will be used when mapping the storage. Since this example container includes a resource, Pacemaker will automatically map the equivalent of source-dir=/etc/pacemaker/authkey in the container, so you do not need to specify that path in the storage mapping. In this example, the existing cluster configuration is put into a temporary file named tmp-cib.xml , which is then copied to a file named tmp-cib.xml.deltasrc . All modifications to the cluster configuration are made to the tmp-cib.xml file. When the updates are complete, this procedure uses the diff-against option of the pcs cluster cib-push command so that only the updates to the configuration file are pushed to the active configuration file.
[ "pcs resource bundle create bundle_id container docker [ container_options ] [network network_options ] [port-map port_options ]... [storage-map storage_options ]... [meta meta_options ] [--disabled] [--wait[=n]]", "pcs cluster cib tmp-cib.xml cp tmp-cib.xml tmp-cib.xml.deltasrc pcs -f tmp.cib.xml resource bundle create httpd-bundle container docker image=pcmktest:http replicas=3 options=--log-driver=journald network ip-range-start=192.168.122.131 host-interface=eth0 host-netmask=24 port-map id=httpd-port port=80 storage-map id=httpd-root source-dir-root=/var/local/containers target-dir=/var/www/html options=rw pcs -f tmp-cib.xml resource create httpd ocf:heartbeat:apache statusurl=http://localhost/server-status bundle httpd-bundle pcs cluster cib-push tmp-cib.xml diff-against=tmp-cib.xml.deltasrc" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-containers-haar
3.7. Supported Image Customizations
3.7. Supported Image Customizations A number of image customizations are supported in blueprints. In order to make use of these options, you need to initially configure them in the blueprint and then use the push command to import the modified blueprint into Image Builder. Note These customizations are not currently supported in the accompanying `cockpit-composer` GUI. Set the image host name User specifications for the resulting system image Only the user name is required; you can leave out any other lines. Replace PASSWORD-HASH with the actual password hash. To generate the hash, use a command such as: Important To generate the hash, you must have the python3 package on your system. Use the following command to install the package: Replace PUBLIC-SSH-KEY with the actual public key. Repeat this block for every user you want to include. Group specifications for the resulting system image Repeat this block for every group you want to include. Set an existing user's ssh key Note This option is only applicable for existing users. To create a user and set an ssh key, use the User specifications for the resulting system image customization. Append a kernel boot parameter option to the defaults Set the image host name Add a group for the resulting system image Only the name is required and GID is optional. Set the timezone and the Network Time Protocol (NTP) servers for the resulting system image If you do not set a timezone, the system uses Coordinated Universal Time (UTC) as the default. Setting NTP servers is optional. Set the locale settings for the resulting system image Setting both language and keyboard options is mandatory. You can add multiple languages. The first language you add will be the primary language and the other languages will be secondary. Set the firewall for the resulting system image You can use numeric ports, or their names from the `/etc/services` file, in the port lists. Set which services to enable during boot time You can control which services are enabled at boot time. Some image types already have services enabled or disabled so that the image works correctly and this setup cannot be overridden.
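A blueprint fragment that combines several of these customizations is shown below. It is a sketch only; the host name, time zone, user name, and service names are illustrative values, not requirements:

# Example customizations appended to a blueprint TOML file
[customizations]
hostname = "baseimage"

[customizations.timezone]
timezone = "Europe/Prague"
ntpservers = ["0.pool.ntp.org"]

[[customizations.user]]
name = "admin"
groups = ["wheel"]

[customizations.services]
enabled = ["sshd"]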
[ "[customizations] hostname = \" baseimage \"", "[[customizations.user]] name = \" USER-NAME \" description = \" USER-DESCRIPTION \" password = \" PASSWORD-HASH \" key = \" PUBLIC-SSH-KEY \" home = /home\" /USER-NAME/ \" shell = \" /usr/bin/bash \" groups = [\"users\", \"wheel\"] uid = NUMBER gid = NUMBER", "python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "yum install python3", "[[customizations.group]] name = \" GROUP-NAME \" gid = NUMBER", "[[customizations.sshkey]] user = \" root \" key = \" PUBLIC-SSH-KEY \"", "[[customizations.kernel]] append = \" KERNEL-OPTION \"", "[customizations] hostname = \" BASE-IMAGE \"", "[[customizations.group]] name = \" USER-NAME \" gid = NUMBER", "[customizations.timezone] timezone = \" TIMEZONE \" ntpservers = NTP-SERVER", "[customizations.locale] language = \" [LANGUAGE] \" keyboard = \" KEYBOARD \"", "[customizations.firewall] port = \" [PORTS] \"", "[customizations.services] enabled = \" [SERVICES] \" disabled = \" [SERVICES] \"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-Documentation-Image_Builder-Test_Chapter3-Test_Section_7
Chapter 48. QuotasPluginKafka schema reference
Chapter 48. QuotasPluginKafka schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the QuotasPluginKafka type from QuotasPluginStrimzi . It must have the value kafka for the type QuotasPluginKafka . Property Property type Description type string Must be kafka . producerByteRate integer The default client quota on the maximum bytes per-second that each client can publish to each broker before it is throttled. Applied on a per-broker basis. consumerByteRate integer The default client quota on the maximum bytes per-second that each client can fetch from each broker before it is throttled. Applied on a per-broker basis. requestPercentage integer The default client quota limits the maximum CPU utilization of each client as a percentage of the network and I/O threads of each broker. Applied on a per-broker basis. controllerMutationRate number The default client quota on the rate at which mutations are accepted per second for create topic requests, create partition requests, and delete topic requests, defined for each broker. The mutations rate is measured by the number of partitions created or deleted. Applied on a per-broker basis.
null
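As a rough illustration, the properties above might be set in the Kafka custom resource as sketched below. This is an assumption-laden sketch, not a definitive example: the cluster name my-cluster and the quota values are placeholders, and you should verify the exact placement of the quotas property against your Streams for Apache Kafka version.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... other broker configuration ...
    quotas:
      type: kafka
      producerByteRate: 1048576   # 1 MiB/s per client, applied per broker
      consumerByteRate: 2097152   # 2 MiB/s per client, applied per broker
      requestPercentage: 55
      controllerMutationRate: 50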
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-QuotasPluginKafka-reference
Logging configuration
Logging configuration Red Hat build of Quarkus 3.15 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/logging_configuration/index
Chapter 14. Installing an IdM client with Kickstart
Chapter 14. Installing an IdM client with Kickstart A Kickstart enrollment automatically adds a new system to the Identity Management (IdM) domain at the time Red Hat Enterprise Linux is installed. 14.1. Installing a client with Kickstart Follow this procedure to use a Kickstart file to install an Identity Management (IdM) client. Prerequisites Do not start the sshd service prior to the kickstart enrollment. Starting sshd before enrolling the client generates the SSH keys automatically, but the Kickstart file in Section 14.2, "Kickstart file for client installation" uses a script for the same purpose, which is the preferred solution. Procedure Pre-create the host entry on the IdM server, and set a temporary password for the entry: The password is used by Kickstart to authenticate during the client installation and expires after the first authentication attempt. After the client is successfully installed, it authenticates using its keytab. Create a Kickstart file with the contents described in Section 14.2, "Kickstart file for client installation" . Make sure that the network is configured properly in the Kickstart file using the network command. Use the Kickstart file to install the IdM client. 14.2. Kickstart file for client installation You can use a Kickstart file to install an Identity Management (IdM) client. The contents of the Kickstart file must meet certain requirements as outlined here. The ipa-client package in the list of packages to install Add the ipa-client package to the %packages section of the Kickstart file. For example: Post-installation instructions for the IdM client The post-installation instructions must include: An instruction for ensuring SSH keys are generated before enrollment An instruction to run the ipa-client-install utility, while specifying: All the required information to access and configure the IdM domain services The password which you set when pre-creating the client host on the IdM server in Section 14.1, "Installing a client with Kickstart" . For example, the post-installation instructions for a Kickstart installation that uses a one-time password and retrieves the required options from the command line rather than via DNS can look like this: Optionally, you can also include other options in the Kickstart file, such as: For a non-interactive installation, add the --unattended option to ipa-client-install . To let the client installation script request a certificate for the machine: Add the --request-cert option to ipa-client-install . Set the system bus address to /dev/null for both the getcert and ipa-client-install utilities in the Kickstart chroot environment. To do this, add these lines to the post-installation instructions in the Kickstart file before the ipa-client-install instruction: 14.3. Testing an IdM client The command line informs you that ipa-client-install was successful, but you can also do your own test. To test that the Identity Management (IdM) client can obtain information about users defined on the server, check that you are able to resolve a user defined on the server. For example, to check the default admin user: To test that authentication works correctly, su to the root user from a non-root user:
[ "ipa host-add client.example.com --password= secret", "%packages ipa-client", "%post --log=/root/ks-post.log Generate SSH keys; ipa-client-install uploads them to the IdM server by default /usr/libexec/openssh/sshd-keygen rsa Run the client install script /usr/sbin/ipa-client-install --hostname= client.example.com --domain= EXAMPLE.COM --enable-dns-updates --mkhomedir -w secret --realm= EXAMPLE.COM --server= server.example.com", "env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null getcert list env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null ipa-client-install", "[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)", "[user@client ~]USD su - Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/installing-an-ipa-client-with-kickstart_installing-identity-management
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/creating_and_using_execution_environments/providing-feedback
13.8. Locking Down User Logout and User Switching
13.8. Locking Down User Logout and User Switching Follow these steps to prevent the user from logging out. Create the /etc/dconf/profile/user profile which contains the following lines: local is the name of a dconf database. Create the directory /etc/dconf/db/local.d/ if it does not already exist. Create the key file /etc/dconf/db/local.d/00-logout to provide information for the local database: Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/lockdown : Update the system databases: Users must log out and back in again before the system-wide settings take effect. Important Users can evade the logout lockdown by switching to a different user, which can thwart the system administrator's intentions. For this reason, it is recommended to also disable "user switching". Procedure 13.8. Prevent the User from Switching to a Different User Account Create the /etc/dconf/profile/user profile which contains the following lines: local is the name of a dconf database. Create the directory /etc/dconf/db/local.d/ if it does not already exist. Create the key file /etc/dconf/db/local.d/00-user-switching to provide information for the local database: Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/lockdown : Update the system databases: Users must log out and back in again before the system-wide settings take effect.
[ "user-db:user system-db:local", "Prevent the user from user switching disable-log-out=true", "Lock this key to disable user logout /org/gnome/desktop/lockdown/disable-log-out", "dconf update", "user-db:user system-db:local", "Prevent the user from user switching disable-user-switching=true", "Lock this key to disable user switching /org/gnome/desktop/lockdown/disable-user-switching", "dconf update" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/lockdown-logout
16.5. Other Commands
16.5. Other Commands This section describes tools that are simpler equivalents to using guestfish to view and edit guest virtual machine disk images. virt-cat is similar to the guestfish download command. It downloads and displays a single file from the guest virtual machine. For example: virt-edit is similar to the guestfish edit command. It can be used to interactively edit a single file within a guest virtual machine. For example, you may need to edit the grub.conf file in a Linux-based guest virtual machine that will not boot: virt-edit has another mode where it can be used to make simple non-interactive changes to a single file. For this, the -e option is used. This command, for example, changes the root password in a Linux guest virtual machine so that the root account has no password: virt-ls is similar to the guestfish ls , ll and find commands. It is used to list a directory or directories (recursively). For example, the following command would recursively list files and directories under /home in a Linux guest virtual machine:
[ "virt-cat RHEL3 /etc/ntp.conf | grep ^server server 127.127.1.0 # local clock", "virt-edit LinuxGuest /boot/grub/grub.conf", "virt-edit LinuxGuest /etc/passwd -e 's/^root:.*?:/root::/'", "virt-ls -R LinuxGuest /home/ | less" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-other-commands
Chapter 10. Scalability and performance optimization
Chapter 10. Scalability and performance optimization 10.1. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 10.1.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 10.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in the OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift image registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. 10.1.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 10.2. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Not configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 10.1.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. 
Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 10.1.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 10.1.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 10.1.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 10.1.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 10.1.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 10.1.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . 
If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 10.1.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 10.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. Will grow slowly with the environment. Only storing metadata. Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. 10.1.4. Optimizing storage performance for Microsoft Azure OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. For production Azure clusters and clusters with intensive workloads, the virtual machine operating system disk for control plane machines should be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure and Azure Stack Hub, disk performance is directly dependent on SSD disk sizes. To achieve the throughput supported by a Standard_D8s_v3 virtual machine, or other similar machine types, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low latency and high IOPS and throughput when reading data. Reading data from the cache, which is present either in the VM memory or in the local SSD disk, is much faster than reading from the disk, which is in the blob storage. 10.1.5. Additional resources Configuring the Elasticsearch log store 10.2. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 10.2.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. 
When evaluating the performance of a single HAProxy router in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network/SDN solution, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests were performed on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. In HTTP keep-alive mode scenarios: Encryption LoadBalancerService HostNetwork none 21515 29622 edge 16743 22913 passthrough 36786 53295 re-encrypt 21583 25198 In HTTP close (no keep-alive) scenarios: Encryption LoadBalancerService HostNetwork none 5719 8273 edge 2729 4069 passthrough 4121 5344 re-encrypt 2320 2941 The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4 . Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router: Number of applications Application type 5-10 static file/web server or caching proxy 100-1000 applications generating dynamic content In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels . You can modify the Ingress Controller deployment by using the information provided in Setting Ingress Controller thread count for threads and Ingress Controller configuration parameters for timeouts, and other tuning configurations in the Ingress Controller specification. 10.2.2. Configuring Ingress Controller liveness, readiness, and startup probes Cluster administrators can configure the timeout values for the kubelet's liveness, readiness, and startup probes for router deployments that are managed by the OpenShift Container Platform Ingress Controller (router). The liveness and readiness probes of the router use the default timeout value of 1 second, which is too brief when networking or runtime performance is severely degraded. Probe timeouts can cause unwanted router restarts that interrupt application connections. The ability to set larger timeout values can reduce the risk of unnecessary and unwanted restarts. You can update the timeoutSeconds value on the livenessProbe , readinessProbe , and startupProbe parameters of the router container.
Parameter Description livenessProbe The livenessProbe reports to the kubelet whether a pod is dead and needs to be restarted. readinessProbe The readinessProbe reports whether a pod is healthy or unhealthy. When the readiness probe reports an unhealthy pod, then the kubelet marks the pod as not ready to accept traffic. Subsequently, the endpoints for that pod are marked as not ready, and this status propagates to the kube-proxy. On cloud platforms with a configured load balancer, the kube-proxy communicates to the cloud load-balancer not to send traffic to the node with that pod. startupProbe The startupProbe gives the router pod up to 2 minutes to initialize before the kubelet begins sending the router liveness and readiness probes. This initialization time can prevent routers with many routes or endpoints from prematurely restarting. Important The timeout configuration option is an advanced tuning technique that can be used to work around issues. However, these issues should eventually be diagnosed and possibly a support case or Jira issue opened for any issues that causes probes to time out. The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes: USD oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}' Verification USD oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3 10.2.3. Configuring HAProxy reload interval When you update a route or an endpoint associated with a route, the OpenShift Container Platform router updates the configuration for HAProxy. Then, HAProxy reloads the updated configuration for those changes to take effect. When HAProxy reloads, it generates a new process that handles new connections using the updated configuration. HAProxy keeps the old process running to handle existing connections until those connections are all closed. When old processes have long-lived connections, these processes can accumulate and consume resources. The default minimum HAProxy reload interval is five seconds. You can configure an Ingress Controller using its spec.tuningOptions.reloadInterval field to set a longer minimum reload interval. Warning Setting a large value for the minimum HAProxy reload interval can cause latency in observing updates to routes and their endpoints. To lessen the risk, avoid setting a value larger than the tolerable latency for updates. The maximum value for HAProxy reload interval is 120 seconds. Procedure Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}' 10.3. Optimizing networking The OpenShift SDN uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, multi-queue, and ethtool settings. OVN-Kubernetes uses Generic Network Virtualization Encapsulation (Geneve) instead of VXLAN as the tunnel protocol. 
This network can be tuned by using network interface controller (NIC) offloads. VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 10.3.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is configured at the time of OpenShift Container Platform installation, and you can also change the cluster's MTU as a Day 2 operation. See "Changing cluster network MTU" for more information. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The OpenShift SDN network plugin overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, this should be set to 1450 . On a jumbo frame ethernet network, this should be set to 8950 . These values should be set automatically by the Cluster Network Operator based on the NIC's configured MTU. Therefore, cluster administrators do not typically update these values. Amazon Web Services (AWS) and bare-metal environments support jumbo frame ethernet networks. This setting will help throughput, especially with transmission control protocol (TCP). For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN network plugin. Other SDN solutions might require the value to be more or less. Additional resources Changing cluster network MTU 10.3.2. 
Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 10.3.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPSec is enabled and will lead to decreased throughput and increased CPU usage. 10.3.4. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes network plugin Configuration parameters for the OpenShift SDN network plugin Improving cluster stability in high latency environments using worker latency profiles 10.4. Optimizing CPU usage with mount namespace encapsulation You can optimize CPU usage in OpenShift Container Platform clusters by using mount namespace encapsulation to provide a private namespace for kubelet and CRI-O processes. This reduces the cluster CPU resources used by systemd with no difference in functionality. Important Mount namespace encapsulation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 10.4.1. Encapsulating mount namespaces Mount namespaces are used to isolate mount points so that processes in different namespaces cannot view each others' files. Encapsulation is the process of moving Kubernetes mount namespaces to an alternative location where they will not be constantly scanned by the host operating system. The host operating system uses systemd to constantly scan all mount namespaces: both the standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current implementation of kubelet and CRI-O both use the top-level namespace for all container runtime and kubelet mount points. However, encapsulating these container-specific mount points in a private namespace reduces systemd overhead with no difference in functionality. Using a separate mount namespace for both CRI-O and kubelet can encapsulate container-specific mounts from any systemd or other host operating system interaction. This ability to potentially achieve major CPU optimization is now available to all OpenShift Container Platform administrators. Encapsulation can also improve security by storing Kubernetes-specific mount points in a location safe from inspection by unprivileged users. 
The following diagrams illustrate a Kubernetes installation before and after encapsulation. Both scenarios show example containers which have mount propagation settings of bidirectional, host-to-container, and none. Here we see systemd, host operating system processes, kubelet, and the container runtime sharing a single mount namespace. systemd, host operating system processes, kubelet, and the container runtime each have access to and visibility of all mount points. Container 1, configured with bidirectional mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 1, such as /run/a is visible to systemd, host operating system processes, kubelet, container runtime, and other containers with host-to-container or bidirectional mount propagation configured (as in Container 2). Container 2, configured with host-to-container mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 2, such as /run/b , is not visible to any other context. Container 3, configured with no mount propagation, has no visibility of external mount points. A mount originating in Container 3, such as /run/c , is not visible to any other context. The following diagram illustrates the system state after encapsulation. The main systemd process is no longer devoted to unnecessary scanning of Kubernetes-specific mount points. It only monitors systemd-specific and host mount points. The host operating system processes can access only the systemd and host mount points. Using a separate mount namespace for both CRI-O and kubelet completely separates all container-specific mounts away from any systemd or other host operating system interaction whatsoever. The behavior of Container 1 is unchanged, except a mount it creates such as /run/a is no longer visible to systemd or host operating system processes. It is still visible to kubelet, CRI-O, and other containers with host-to-container or bidirectional mount propagation configured (like Container 2). The behavior of Container 2 and Container 3 is unchanged. 10.4.2. Configuring mount namespace encapsulation You can configure mount namespace encapsulation so that a cluster runs with less resource overhead. Note Mount namespace encapsulation is a Technology Preview feature and it is disabled by default. To use it, you must enable the feature manually. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a file called mount_namespace_config.yaml with the following YAML: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service Apply the mount namespace MachineConfig CR by running the following command: USD oc apply -f mount_namespace_config.yaml Example output machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created The MachineConfig CR can take up to 30 minutes to finish being applied in the cluster. 
You can check the status of the MachineConfig CR by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1 Wait for the MachineConfig CR to be applied successfully across all control plane and worker nodes after running the following command: USD oc wait --for=condition=Updated mcp --all --timeout=30m Example output machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met Verification To verify encapsulation for a cluster host, run the following commands: Open a debug shell to the cluster host: USD oc debug node/<node_name> Open a chroot session: sh-4.4# chroot /host Check the systemd mount namespace: sh-4.4# readlink /proc/1/ns/mnt Example output mnt:[4026531953] Check kubelet mount namespace: sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt Example output mnt:[4026531840] Check the CRI-O mount namespace: sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt Example output mnt:[4026531840] These commands return the mount namespaces associated with systemd, kubelet, and the container runtime. In OpenShift Container Platform, the container runtime is CRI-O. Encapsulation is in effect if systemd is in a different mount namespace to kubelet and CRI-O as in the above example. Encapsulation is not in effect if all three processes are in the same mount namespace. 10.4.3. Inspecting encapsulated namespaces You can inspect Kubernetes-specific mount points in the cluster host operating system for debugging or auditing purposes by using the kubensenter script that is available in Red Hat Enterprise Linux CoreOS (RHCOS). SSH shell sessions to the cluster host are in the default namespace. To inspect Kubernetes-specific mount points in an SSH shell prompt, you need to run the kubensenter script as root. The kubensenter script is aware of the state of the mount encapsulation, and is safe to run even if encapsulation is not enabled. Note oc debug remote shell sessions start inside the Kubernetes namespace by default. You do not need to run kubensenter to inspect mount points when you use oc debug . If the encapsulation feature is not enabled, the kubensenter findmnt and findmnt commands return the same output, regardless of whether they are run in an oc debug session or in an SSH shell prompt. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured SSH access to the cluster host. Procedure Open a remote SSH shell to the cluster host. For example: USD ssh core@<node_name> Run commands using the provided kubensenter script as the root user. To run a single command inside the Kubernetes namespace, provide the command and any arguments to the kubensenter script. For example, to run the findmnt command inside the Kubernetes namespace, run the following command: [core@control-plane-1 ~]USD sudo kubensenter findmnt Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs ... 
To start a new interactive shell inside the Kubernetes namespace, run the kubensenter script without any arguments: [core@control-plane-1 ~]USD sudo kubensenter Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt 10.4.4. Running additional services in the encapsulated namespace Any monitoring tool that relies on the ability to run in the host operating system and have visibility of mount points created by kubelet, CRI-O, or containers themselves, must enter the container mount namespace to see these mount points. The kubensenter script that is provided with OpenShift Container Platform executes another command inside the Kubernetes mount point and can be used to adapt any existing tools. The kubensenter script is aware of the state of the mount encapsulation feature status, and is safe to run even if encapsulation is not enabled. In that case the script executes the provided command in the default mount namespace. For example, if a systemd service needs to run inside the new Kubernetes mount namespace, edit the service file and use the ExecStart= command line with kubensenter . [Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2 10.4.5. Additional resources What are namespaces Manage containers in namespaces by using nsenter MachineConfig
[ "oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"livenessProbe\":{\"timeoutSeconds\":5},\"readinessProbe\":{\"timeoutSeconds\":5}}]}}}}'", "oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"tuningOptions\":{\"reloadInterval\":\"15s\"}}}'", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service", "oc apply -f mount_namespace_config.yaml", "machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1", "oc wait --for=condition=Updated mcp --all --timeout=30m", "machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# readlink /proc/1/ns/mnt", "mnt:[4026531953]", "sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt", "mnt:[4026531840]", "sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt", "mnt:[4026531840]", "ssh core@<node_name>", "[core@control-plane-1 ~]USD sudo kubensenter findmnt", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs", "[core@control-plane-1 ~]USD sudo kubensenter", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt", "[Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/scalability-and-performance-optimization
Chapter 5. Connectivity Link benefits
Chapter 5. Connectivity Link benefits Connectivity Link provides the following main business benefits: User-role oriented Gateway API is composed of API resources that correspond to the organizational roles of infrastructure owner, cluster operator, and application developer. Infrastructure owners and cluster operators are platform engineers who define how shared infrastructure can be used by many different non-coordinating application development teams. Application developers are responsible for creating and managing applications running in a cluster. For example, this includes creating APIs and managing application timeouts, request matching, and path routing to backends. Kubernetes-native Connectivity Link is designed to use Kubernetes-native features for resource efficiency and optimal use. These features can run on any public or private OpenShift cluster, offering multicloud and hybrid-cloud behavior by default. OpenShift is proven to be scalable, resilient, and highly available. Expressive configuration Gateway API resources provide built-in capabilities for header-based matching, traffic weighting, and other capabilities that are only currently possible in existing ingress standards by using custom annotations and custom code. This allows for more intelligent routing, security, and isolation of specific routes without the necessity of writing custom code. Portability Gateway API is an open source standard with many implementations, which is designed by using the concept of flexible conformance. This promotes a highly portable core API that still has flexibility and extensibility to support native capabilities of the environment and implementation. This enables the concepts and core resources to be consistent across implementation and environments, reducing complexity and increasing familiarity. Hybrid cloud and multicloud Connectivity Link includes the flexibility to deploy the same application to any OpenShift cluster hosted on a public or private cloud. This removes a singular dependency or a single point of failure by being tied to a specific cloud provider. For example, if one cloud provider is having network issues, you can switch your deployment and traffic to another cloud provider to minimize the impact on your customers. This provides high availability and disaster recovery and ensures that you are prepared for the unexpected and can establish uninterrupted service, so that your platforms and applications remain resilient. Infrastructure as code You can define your infrastructure by using code to ensure that it is version controlled, tested, and easily replicated. Automated scaling leverages OpenShift auto-scaling features to dynamically adjust resources based on workload demand. This also includes the ability to implement robust monitoring and logging solutions to gain full visibility into your OpenShift clusters. Modular and flexible The highly flexible and modular Connectivity Link architecture enables you to use the technologies and tools that you already have in place, while also allowing you to plug into the connectivity management platform for maximum effectiveness. Figure 5.1. Connectivity Link modular and flexible design
null
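To make the "expressive configuration" point concrete, the following Gateway API sketch shows header-based matching and traffic weighting on an HTTPRoute. It is a generic Gateway API illustration, not a Connectivity Link-specific example: the gateway name, route name, header, and backend services are placeholder assumptions.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
      headers:
      - name: x-canary
        value: "true"
    backendRefs:
    - name: api-v2        # canary backend receives 20% of matching traffic
      port: 8080
      weight: 20
    - name: api-v1
      port: 8080
      weight: 80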
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/introduction_to_connectivity_link/connectivity-link-benefits_rhcl
Chapter 7. Scheduling Windows container workloads
Chapter 7. Scheduling Windows container workloads You can schedule Windows workloads to Windows compute nodes. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a Windows container as the OS image. You have created a Windows compute machine set. 7.1. Windows pod placement Before deploying your Windows workloads to the cluster, you must configure your Windows node scheduling so pods are assigned correctly. Since you have a machine hosting your Windows node, it is managed the same as a Linux-based node. Likewise, scheduling a Windows pod to the appropriate Windows node is completed similarly, using mechanisms like taints, tolerations, and node selectors. With multiple operating systems, and the ability to run multiple Windows OS variants in the same cluster, you must map your Windows pods to a base Windows OS variant by using a RuntimeClass object. For example, if you have multiple Windows nodes running on different Windows Server container versions, the cluster could schedule your Windows pods to an incompatible Windows OS variant. You must have RuntimeClass objects configured for each Windows OS variant on your cluster. Using a RuntimeClass object is also recommended if you have only one Windows OS variant available in your cluster. For more information, see Microsoft's documentation on Host and container version compatibility . Also, it is recommended that you set the spec.os.name.windows parameter in your workload pods. The Windows Machine Config Operator (WMCO) uses this field to authoritatively identify the pod operating system for validation and to enforce Windows-specific pod security context constraints (SCCs). Currently, this parameter has no effect on pod scheduling. For more information about this parameter, see the Kubernetes Pods documentation . Important The container base image must be the same Windows OS version and build number that is running on the node where the container is to be scheduled. Also, if you upgrade the Windows nodes from one version to another, for example going from 20H2 to 2022, you must upgrade your container base image to match the new version. For more information, see Windows container version compatibility . Additional resources Controlling pod placement using the scheduler Controlling pod placement using node taints Placing pods on specific nodes using node selectors 7.2. Creating a RuntimeClass object to encapsulate scheduling mechanisms Using a RuntimeClass object simplifies the use of scheduling mechanisms like taints and tolerations; you deploy a runtime class that encapsulates your taints and tolerations and then apply it to your pods to schedule them to the appropriate node. Creating a runtime class is also necessary in clusters that support multiple operating system variants. Procedure Create a RuntimeClass object YAML file. For example, runtime-class.yaml : apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: "windows" - effect: NoSchedule key: os operator: Equal value: "Windows" 1 Specify the RuntimeClass object name, which is defined in the pods you want to be managed by this runtime class. 2 Specify labels that must be present on nodes that support this runtime class. 
Pods using this runtime class can only be scheduled to a node matched by this selector. The node selector of the runtime class is merged with the existing node selector of the pod. Any conflicts prevent the pod from being scheduled to the node. For Windows 2019, specify the node.kubernetes.io/windows-build: '10.0.17763' label. For Windows 2022, specify the node.kubernetes.io/windows-build: '10.0.20348' label. 3 Specify tolerations to append to pods, excluding duplicates, running with this runtime class during admission. This combines the set of nodes tolerated by the pod and the runtime class. Create the RuntimeClass object: USD oc create -f <file-name>.yaml For example: USD oc create -f runtime-class.yaml Apply the RuntimeClass object to your pod to ensure it is scheduled to the appropriate operating system variant: apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1 # ... 1 Specify the runtime class to manage the scheduling of your pod. 7.3. Sample Windows container workload deployment You can deploy Windows container workloads to your cluster once you have a Windows compute node available. Note This sample deployment is provided for reference only. Example Service object apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer Example Deployment object apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: "ContainerAdministrator" os: name: "windows" runtimeClassName: windows2019 3 1 Specify the container image to use: mcr.microsoft.com/powershell:<tag> or mcr.microsoft.com/windows/servercore:<tag> . The container image must match the Windows version running on the node. For Windows 2019, use the ltsc2019 tag. For Windows 2022, use the ltsc2022 tag. 2 Specify the commands to execute on the container. For the mcr.microsoft.com/powershell:<tag> container image, you must define the command as pwsh.exe . For the mcr.microsoft.com/windows/servercore:<tag> container image, you must define the command as powershell.exe . 3 Specify the runtime class you created for the Windows operating system variant on your cluster. 7.4. Support for Windows CSI drivers Red Hat OpenShift support for Windows Containers installs CSI Proxy on all Windows nodes in the cluster. CSI Proxy is a plug-in that enables CSI drivers to perform storage operations on the node. 
To use persistent storage with Windows workloads, you must deploy a specific Windows CSI driver daemon set, as described in your storage provider's documentation. By default, the WMCO does not automatically create the Windows CSI driver daemon set. See the list of production drivers in the Kubernetes CSI Developer Documentation. Note Red Hat does not provide support for the third-party production drivers listed in the Kubernetes CSI Developer Documentation. 7.5. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machines.machine.openshift.io -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api Or: USD oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines.machine.openshift.io
[ "apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"windows\" - effect: NoSchedule key: os operator: Equal value: \"Windows\"", "oc create -f <file-name>.yaml", "oc create -f runtime-class.yaml", "apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1", "apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer", "apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" os: name: \"windows\" runtimeClassName: windows2019 3", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/windows_container_support_for_openshift/scheduling-windows-workloads
8.102. libxml2
8.102. libxml2 8.102.1. RHBA-2013:1737 - libxml2 bug fix update Updated libxml2 packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libxml2 library is a development toolbox providing the implementation of various XML standards. Bug Fix BZ# 863166 Previously, parsing an XML file containing entities loaded via Document Type Definition (DTD) using the XML::LibXML module could lead to a missing entity error because XML::LibXML did not load the DTD entities. A patch has been applied to address this problem, and XML files are now parsed successfully in this scenario. Users of libxml2 are advised to upgrade to these updated packages, which fix this bug. The desktop must be restarted (log out, then log back in) for this update to take effect.
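One quick way to exercise this parsing path from the command line is the xmllint tool that ships with libxml2. A minimal sketch, assuming a hypothetical file doc.xml whose entities are declared in an external DTD:

# Load the external DTD and substitute entity references while parsing.
# With the fixed packages, the entity references should resolve instead of
# producing missing entity errors.
xmllint --loaddtd --noent doc.xml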
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libxml2
Chapter 6. Applying autoscaling to an OpenShift Container Platform cluster
Chapter 6. Applying autoscaling to an OpenShift Container Platform cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. Important You can configure the cluster autoscaler only in clusters where the machine API is operational. 6.1. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The sum of CPU and memory requests of all pods running on the node is less than 50% of the allocated resources on the node. The cluster autoscaler can move all pods running on the node to the other nodes. The cluster autoscaler does not have scale down disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation. For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62. 
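The eviction annotations in the list above are set directly on pods. A minimal sketch, assuming placeholder names <pod_name> and <namespace>:

# Mark a pod as safe to evict so that it does not block scale-down of its node.
oc annotate pod <pod_name> -n <namespace> \
  cluster-autoscaler.kubernetes.io/safe-to-evict="true"

# Conversely, protect a pod from autoscaler-driven eviction.
oc annotate pod <pod_name> -n <namespace> --overwrite \
  cluster-autoscaler.kubernetes.io/safe-to-evict="false"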
If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources. 6.2. About the machine autoscaler The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OpenShift Container Platform cluster. You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 6.3. Configuring the cluster autoscaler First, deploy the cluster autoscaler to manage automatic resource scaling in your OpenShift Container Platform cluster. Note Because the cluster autoscaler is scoped to the entire cluster, you can make only one cluster autoscaler for the cluster. 6.3.1. ClusterAutoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. 
apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optionally, specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types. 8 Specify the minimum number of GPUs to deploy in the cluster. 9 Specify the maximum number of GPUs to deploy in the cluster. 10 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . 11 Specify whether the cluster autoscaler can remove unnecessary nodes. 12 Optionally, specify the period to wait before deleting a node after a node has recently been added . If you do not specify a value, the default value of 10m is used. 13 Specify the period to wait before deleting a node after a node has recently been deleted . If you do not specify a value, the default value of 10s is used. 14 Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 15 Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 6.3.2. Deploying the cluster autoscaler To deploy the cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for the ClusterAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized. 6.4. 
Next steps After you configure the cluster autoscaler, you must configure at least one machine autoscaler. 6.5. Configuring the machine autoscalers After you deploy the cluster autoscaler, deploy MachineAutoscaler resources that reference the machine sets that are used to scale the cluster. Important You must deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource. Note You must configure separate resources for each machine set. Remember that machine sets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The machine set that you scale must have at least one machine in it. 6.5.1. MachineAutoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<region> . 2 Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, or RHOSP, this value can be set to 0 . For other providers, do not set this value to 0 . You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. 4 In this section, provide values that describe the existing machine set to scale. 5 The kind parameter value is always MachineSet . 6 The name value must match the name of an existing machine set, as shown in the metadata.name parameter value. 6.5.2. Deploying the machine autoscaler To deploy the machine autoscaler, you create an instance of the MachineAutoscaler resource. Procedure Create a YAML file for the MachineAutoscaler resource that contains the customized resource definition. Create the resource in the cluster: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the resource file that you customized. 6.6. Additional resources For more information about pod priority, see Including pod priority in pod scheduling decisions in OpenShift Container Platform .
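The priority cutoff described in "About the cluster autoscaler" can be exercised with a low-value priority class. A minimal sketch, assuming the sample podPriorityThreshold of -10 shown above; the class name best-effort-batch is a placeholder:

# Create a PriorityClass whose value is below the podPriorityThreshold (-10 in the sample),
# so pods that reference it never trigger a scale-up and do not prevent scale-down.
oc apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort-batch
value: -20
globalDefault: false
description: "Pods at this priority run only on spare cluster capacity."
EOF

# Reference the class from a pod with spec.priorityClassName: best-effort-batch.

# Confirm that the autoscaler resources exist after deployment.
oc get clusterautoscaler default
oc get machineautoscaler -n openshift-machine-api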
[ "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/applying-autoscaling
Chapter 3. Red Hat build of OpenJDK features
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 11 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from previous Red Hat build of OpenJDK 11 releases. Note For all the other changes and security fixes, see OpenJDK 11.0.19 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that are included with the Red Hat build of OpenJDK 11.0.19 release: SSLv2Hello and SSLv3 protocols removed from default-enabled TLS protocols SSLv2Hello and SSLv3 are versions of the SSL protocol that are disabled by default, because they have not been considered secure for some time. The SSLv2Hello and SSLv3 protocols are superseded by the more secure and modern TLS protocol, and users can switch to TLS versions 1.2 or 1.3. With release Red Hat build of OpenJDK 11.0.19, the list of default-enabled protocols no longer includes SSLv2Hello and SSLv3. Therefore, even if you remove SSLv3 from the jdk.tls.disabledAlgorithms security property, the following methods will no longer return SSLv3: SSLServerSocket.getEnabledProtocols() SSLEngine.getEnabledProtocols() SSLParameters.getProtocols() Now, if you want to enable SSLv3, you must use the jdk.tls.client.protocols or jdk.tls.server.protocols system properties on the command line, or call one of the following methods to enable SSLv3 programmatically: SSLSocket.setEnabledProtocols() SSLServerSocket.setEnabledProtocols() SSLEngine.setEnabledProtocols() See JDK-8190492 (JDK Bug System) . Certigna (Dhimyotis) root certificate authority (CA) certificate added In release Red Hat build of OpenJDK 11.0.19, the cacerts truststore includes the Certigna (Dhimyotis) root certificate: Name: Certigna (Dhimyotis) Alias name: certignarootca Distinguished name: CN=Certigna, O=Dhimyotis, C=FR See JDK-8245654 (JDK Bug System) . listRoots method returns all available drives on Windows In previous releases, the java.io.File.listRoots() method on Windows systems filtered out any disk drives that were not accessible or did not have media loaded. However, this filtering led to observable performance issues. Now, with release Red Hat build of OpenJDK 11.0.19, the listRoots method returns all available disk drives unfiltered. See JDK-8208077 (JDK Bug System) . Enhanced Swing platform support In earlier releases of Red Hat build of OpenJDK, HTML object tags embedded in Swing HTML components were rendered. With release Red Hat build of OpenJDK 11.0.19, rendering only occurs if you set the new system property swing.html.object to true. By default, the swing.html.object property is set to false. JDK bug system reference ID: JDK-8296832.
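Both of the system properties mentioned above can be passed on the java command line. A minimal sketch, assuming a placeholder application my-app.jar; note that enabling SSLv3 also requires removing it from the jdk.tls.disabledAlgorithms security property, as described above:

# Re-enable SSLv3 for client connections (not recommended; SSLv3 is insecure).
java -Djdk.tls.client.protocols="SSLv3,TLSv1.2,TLSv1.3" -jar my-app.jar

# Opt back in to rendering HTML object tags in Swing HTML components.
java -Dswing.html.object=true -jar my-app.jar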
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.19/rn-openjdk11019-features_openjdk
Chapter 1. Support overview
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : Red Hat OpenShift Service on AWS collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected cluster . Red Hat OpenShift Service on AWS collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out Red Hat OpenShift Service on AWS upgrades. Improve the upgrade experience. Insight Operator : By default, Red Hat OpenShift Service on AWS installs and enables the Insight Operator, which reports configuration and component failure status every two hours. The Insight Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific Red Hat OpenShift Service on AWS cluster node or a container to Red Hat Support to help troubleshoot network-related issues. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following Red Hat OpenShift Service on AWS component issues: Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. 
Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. OpenShift CLI ( oc ) issues : Investigate OpenShift CLI ( oc ) issues by increasing the log level.
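The data-gathering tools referenced above can be driven directly from the CLI. A minimal sketch; the destination directory is a placeholder:

# Collect cluster debugging data to attach to a support case.
oc adm must-gather --dest-dir=./must-gather-output

# Obtain the cluster ID to provide to Red Hat Support.
oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'

# Re-run a failing command with a higher log level to investigate OpenShift CLI (oc) issues.
oc get pods --loglevel=6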
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/support/support-overview
5.6. SELinux Contexts - Labeling Files
5.6. SELinux Contexts - Labeling Files On systems running SELinux, all processes and files are labeled in a way that represents security-relevant information. This information is called the SELinux context. For files, this is viewed using the ls -Z command: In this example, SELinux provides a user ( unconfined_u ), a role ( object_r ), a type ( user_home_t ), and a level ( s0 ). This information is used to make access control decisions. On DAC systems, access is controlled based on Linux user and group IDs. SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first. Note By default, newly-created files and directories inherit the SELinux type of their parent directories. For example, when creating a new file in the /etc/ directory that is labeled with the etc_t type, the new file inherits the same type: There are multiple commands for managing the SELinux context for files, such as chcon , semanage fcontext , and restorecon . 5.6.1. Temporary Changes: chcon The chcon command changes the SELinux context for files. However, changes made with the chcon command do not survive a file system relabel, or the execution of the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. When using chcon , users provide all or part of the SELinux context to change. An incorrect file type is a common cause of SELinux denying access. Quick Reference Run the chcon -t type file-name command to change the file type, where type is a type, such as httpd_sys_content_t , and file-name is a file or directory name. Run the chcon -R -t type directory-name command to change the type of the directory and its contents, where type is a type, such as httpd_sys_content_t , and directory-name is a directory name. Procedure 5.5. Changing a File's or Directory's Type The following procedure demonstrates changing the type, and no other attributes of the SELinux context. The example in this section works the same for directories, for example, if file1 was a directory. Run the cd command without arguments to change into your home directory. Run the touch file1 command to create a new file. Use the ls -Z file1 command to view the SELinux context for file1 : In this example, the SELinux context for file1 includes the SELinux unconfined_u user, object_r role, user_home_t type, and the s0 level. For a description of each part of the SELinux context, refer to Chapter 3, SELinux Contexts . Run the chcon -t samba_share_t file1 command to change the type to samba_share_t . The -t option only changes the type. View the change with ls -Z file1 : Use the restorecon -v file1 command to restore the SELinux context for the file1 file. Use the -v option to view what changes: In this example, the type, samba_share_t , is restored to the correct, user_home_t type. When using targeted policy (the default SELinux policy in Red Hat Enterprise Linux 6), the restorecon command reads the files in the /etc/selinux/targeted/contexts/files/ directory, to see which SELinux context files should have. Procedure 5.6. Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type (along with its contents) to a type used by the Apache HTTP Server. 
The configuration in this example is used if you want Apache HTTP Server to use a different document root (instead of /var/www/html/ ): As the Linux root user, run the mkdir /web command to create a new directory, and then the touch /web/file{1,2,3} command to create 3 empty files ( file1 , file2 , and file3 ). The /web/ directory and files in it are labeled with the default_t type: As the Linux root user, run the chcon -R -t httpd_sys_content_t /web/ command to change the type of the /web/ directory (and its contents) to httpd_sys_content_t : As the Linux root user, run the restorecon -R -v /web/ command to restore the default SELinux contexts: Refer to the chcon (1) manual page for further information about chcon . Note Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored.
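Because chcon changes do not survive a file system relabel, the /web example above can also be made persistent with the semanage fcontext command mentioned earlier. A minimal sketch; the regular expression covers the directory and everything beneath it:

# Add a persistent file-context rule for /web and its contents, then apply it.
semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?"
restorecon -R -v /web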
[ "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ - /etc/ drwxr-xr-x. root root system_u:object_r: etc_t :s0 /etc", "~]# touch /etc/file1", "~]# ls -lZ /etc/file1 -rw-r--r--. root root unconfined_u:object_r: etc_t :s0 /etc/file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:samba_share_t:s0 file1", "~]USD restorecon -v file1 restorecon reset file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:user_home_t:s0", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web ~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# chcon -R -t httpd_sys_content_t /web/ ~]# ls -dZ /web/ drwxr-xr-x root root unconfined_u:object_r:httpd_sys_content_t:s0 /web/ ~]# ls -lZ /web/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]# restorecon -R -v /web/ restorecon reset /web context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-working_with_selinux-selinux_contexts_labeling_files
Scalability and performance
Scalability and performance OpenShift Container Platform 4.16 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team
[ "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring", "oc create -f cluster-monitoring-config.yaml", "sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "oc debug node/<node_name>", "lsblk", "#!/bin/bash set -uo pipefail for device in <device_type_glob>; do 1 /usr/sbin/blkid \"USD{device}\" &> /dev/null if [ USD? == 2 ]; then echo \"secondary device found USD{device}\" echo \"creating filesystem for etcd mount\" mkfs.xfs -L var-lib-etcd -f \"USD{device}\" &> /dev/null udevadm settle touch /etc/var-lib-etcd-mount exit fi done echo \"Couldn't find secondary block device!\" >&2 exit 77", "base64 -w0 etcd-find-secondary-device.sh", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 storage: files: - path: /etc/find-secondary-device mode: 0755 contents: source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> 1 systemd: units: - name: find-secondary-device.service enabled: true contents: | [Unit] Description=Find secondary device DefaultDependencies=false After=systemd-udev-settle.service Before=local-fs-pre.target ConditionPathExists=!/etc/var-lib-etcd-mount [Service] RemainAfterExit=yes ExecStart=/etc/find-secondary-device RestartForceExitStatus=77 [Install] WantedBy=multi-user.target - name: var-lib-etcd.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-label/var-lib-etcd Where=/var/lib/etcd Type=xfs TimeoutSec=120s [Install] RequiredBy=local-fs.target - name: sync-var-lib-etcd-to-etcd.service enabled: true contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setsebool -P rsync_full_access 1 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?' 
ExecStart=/usr/sbin/setsebool -P rsync_full_access 0 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target - name: restorecon-var-lib-etcd.service enabled: true contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target", "oc debug node/<node_name>", "grep -w \"/var/lib/etcd\" /proc/mounts", "/dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", 
"sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"", "Control Plane Hardware Speed: <VALUE>", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"controlPlaneHardwareSpeed\": \"<value>\"}}'", "etcd.operator.openshift.io/cluster patched", "The Etcd \"cluster\" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: \"Faster\": supported values: \"\", \"Standard\", \"Slower\"", "oc describe etcd/cluster | grep \"Control Plane Hardware Speed\"", "Control Plane Hardware Speed: \"\"", "oc get pods -n openshift-etcd -w", "installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Pending 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 0s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 ContainerCreating 0 1s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 2s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 34s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s installer-9-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Completed 0 36s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 0/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Terminating 0 11m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Pending 0 0s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:1/3 0 1s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 Init:2/3 0 2s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 0/4 PodInitializing 0 3s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 4s etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0 1/1 Running 0 26m etcd-ci-ln-qkgs94t-72292-9clnd-master-0 3/4 Running 0 20s etcd-ci-ln-qkgs94t-72292-9clnd-master-0 4/4 Running 0 20s", "oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT", "oc describe etcd/cluster | grep \"Backend Quota\"", "Backend Quota Gi B: <value>", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": <value>}}'", "etcd.operator.openshift.io/cluster patched", "oc describe etcd/cluster | grep \"Backend Quota\"", "oc get pods -n openshift-etcd", "NAME READY STATUS RESTARTS AGE etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0 4/4 Running 0 39m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1 4/4 Running 0 37m etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2 4/4 Running 0 41m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0 1/1 Running 0 51m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1 1/1 Running 0 49m etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2 1/1 Running 0 54m installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 51m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 46m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 44m installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 49m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 40m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 38m installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 42m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 43m revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 Completed 0 43m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1 0/1 Completed 0 42m revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2 0/1 
Completed 0 42m", "oc describe -n openshift-etcd pod/<etcd_podname> | grep \"ETCD_QUOTA_BACKEND_BYTES\"", "ETCD_QUOTA_BACKEND_BYTES: 8589934592", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 5}}'", "The Etcd \"cluster\" is invalid: * spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8 * spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 64}}'", "The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32", "oc describe etcd/cluster | grep \"Backend Quota\"", "Backend Quota Gi B: 10", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"backendQuotaGiB\": 8}}'", "The Etcd \"cluster\" is invalid: spec.backendQuotaGiB: Invalid value: \"integer\": etcd backendQuotaGiB may not be decreased", "query=avg_over_time(pod:container_cpu_usage:sum{namespace=\"openshift-kube-apiserver\"}[30m])", "nodes: - hostName: \"example-node1.example.com\" ironicInspect: \"enabled\"", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: storage-lvmcluster namespace: openshift-storage annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "cpuPartitioningMode: AllNodes", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: example-config namespace: example-ns spec: additionalImages: - quay.io/foobar/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e - quay.io/foobar/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adf - quay.io/foobar/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfs spaceRequired: 45 GiB 1 overrides: preCacheImage: quay.io/test_images/pre-cache:latest platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: 2 - aws - vsphere", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines #apiVersion: \"logging.openshift.io/v1\" #kind: ClusterLogForwarder #metadata: name: instance namespace: openshift-logging #spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit - infrastructure labels: label1: test1 label2: test2 label3: test3 label4: test4 name: all-to-default outputRefs: - kafka-open", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: type: \"vector\"", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management", "--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 
annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle # When setting `stage: Prep`, remember to add the seed image reference object below. # seedImageRef: # image: USDimage # version: USDversion", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management labels: kubernetes.io/metadata.name: openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: targetNamespaces: - openshift-lifecycle-agent", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: \"example-storage-class\" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-pvc-name spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it apiVersion: v1 kind: Pod metadata: labels: run: busybox name: busybox spec: containers: - image: quay.io/quay/busybox:latest name: busybox resources: {} command: [\"/bin/sh\", \"-c\", \"sleep infinity\"] volumeMounts: - name: local-pvc mountPath: /data volumes: - name: local-pvc persistentVolumeClaim: claimName: local-pvc-name dnsPolicy: ClusterFirst restartPolicy: Always ## 3. 
Run the pod on the cluster and verify the size and access of the `/data` mount", "apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "This CR verifies the installation/upgrade of the Sriov Network Operator apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: lvms-operator.openshift-storage annotations: {} status: components: refs: - kind: Subscription namespace: openshift-storage conditions: - type: CatalogSourcesUnhealthy status: \"False\" - kind: InstallPlan namespace: openshift-storage conditions: - type: Installed status: \"True\" - kind: ClusterServiceVersion namespace: openshift-storage conditions: - type: Succeeded status: \"True\" reason: InstallSucceeded", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: lvmcluster namespace: openshift-storage annotations: {} spec: {} #example: creating a vg1 volume group leveraging all available disks on the node except the installation disk. storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage annotations: {} spec: channel: \"stable\" name: lvms-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-storage labels: workload.openshift.io/allowed: \"management\" openshift.io/cluster-monitoring: \"true\" annotations: {}", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lvms-operator-operatorgroup namespace: openshift-storage annotations: {} spec: targetNamespaces: - openshift-storage", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. 
# See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # 
# Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \" \" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp annotations: {} spec: profile: - name: \"slave\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data 
Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"slave\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 
3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters In this example two cards USDiface_nic1 and USDiface_nic2 are connected via SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_nic1\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"2 1\" # \"USDiface_nic2\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"1 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in 
nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \"\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" 
- \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" 
ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp", "apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {}", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: 
node-role.kubernetes.io/master: \"\" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: \"vfio-pci\" vfDriver: \"vfio-pci\" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false logLevel: 0", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. 
# If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false # Disable drain is needed for Single Node Openshift disableDrain: true logLevel: 0", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. 
Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h", "Taken from https://github.com/operator-framework/operator-marketplace/blob/53c124a3f0edfd151652e1f23c87dd39ed7646bb/manifests/01_namespace.yaml Update it as the source evolves. 
apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"\" workload.openshift.io/allowed: \"management\" labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/enforce-version: v1.25 pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/audit-version: v1.25 pod-security.kubernetes.io/warn: baseline pod-security.kubernetes.io/warn-version: v1.25 name: \"openshift-marketplace\"", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager annotations: {} data: pprof-config.yaml: | disabled: True", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART 
EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service 
After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKC
hzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 08-set-rcu-normal-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKC
hzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 07-sriov-related-kernel-args-worker spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" irq-load-balancing.crio.io: \"disable\"", "cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\"", "mkdir -p ./out", "podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.16 | base64 -d | tar xv -C out", "out/ └── telco-core-rds ├── configuration │ └── reference-crs │ ├── optional │ │ ├── logging │ │ ├── networking │ │ │ └── multus │ │ │ └── tap_cni │ │ ├── other │ │ └── tuning │ └── required │ ├── networking │ │ ├── metallb │ │ ├── multinetworkpolicy │ │ └── sriov │ ├── other │ ├── performance │ ├── scheduling │ └── storage │ └── odf-external └── install", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\"", "required count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==", "required count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {} status: phase: Ready", "required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "required: yes count: 1 
--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: odf-operator namespace: openshift-storage spec: channel: \"stable-4.14\" name: odf-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "required count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"add-net-1\", \"plugins\": [{\"type\": \"macvlan\", \"master\": \"bond1\", \"ipam\": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.4.0\", \"name\": \"add-net-2\", \"plugins\": [ {\"type\": \"macvlan\", \"master\": \"bond1\", \"mode\": \"private\" },{ \"type\": \"tuning\", \"name\": \"tuning-arp\" }] }' # type: Raw # Enable to use MultiNetworkPolicy CRs useMultiNetworkPolicy: true", "optional copies: 0-N apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # \"cniVersion\": \"0.3.1\", # \"name\": \"external-169\", # \"type\": \"vlan\", # \"master\": \"ens8f0\", # \"mode\": \"bridge\", # \"vlanid\": 169, # \"ipam\": { # \"type\": \"static\", # } #}'", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system annotations: metallb.universe.tf/address-pool: USDname # eg addresspool3 spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ##############", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: bfdprofile namespace: metallb-system spec: ################ # These values may vary. 
Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one # communities: [USDcommunities] # Note correlation with address pool, or Community # eg: # - bgpcommunity # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "required count: 1-N apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: bfdprofile passwordSecret: {}", "--- apiVersion: metallb.io/v1beta1 kind: Community metadata: name: bgpcommunity namespace: metallb-system spec: communities: [USDcomm]", "required count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: node-role.kubernetes.io/worker: \"\"", "required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service", "apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate spec: {}", "apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: \"stable\" name: kubernetes-nmstate-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "optional (though expected for all) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: \"USDcapabilities\" # eg '{\"mac\": true, \"ips\": true}' ipam: \"USDipam\" # eg '{ \"type\": \"host-local\", \"subnet\": \"10.3.38.0/24\" }' networkNamespace: 
USDnns # eg cni-test resourceName: USDresource # eg resourceTest", "optional (though expected in all deployments) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec eg #deviceType: netdevice #nicSelector: deviceID: \"1593\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"8086\" #nodeSelector: kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD", "required count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\" enableInjector: true enableOperatorWebhook: true disableDrain: false logLevel: 2", "required: yes count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management", "required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: # Periodic is the default setting infoRefreshMode: Periodic machineConfigPoolSelector: matchLabels: # This label must match the pool(s) you want to run NUMA-aligned workloads pools.operator.machineconfiguration.openshift.io/worker: \"\"", "required count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.14\" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace", "required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management", "required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: \"0\" # Image spec should be the latest for the release imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0\" #logLevel: \"Trace\" schedulerName: topo-aware-scheduler", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: # non-schedulable control plane is the default. This ensures # compliance. 
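# mastersSchedulable: false keeps application workloads off the control-plane nodes.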
mastersSchedulable: false policy: name: \"\"", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf", "required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.11.12.13:9092/test pipelines: - inputRefs: - infrastructure #- application - audit labels: label1: test1 label2: test2 label3: test3 label4: test4 label5: test5 name: all-to-default outputRefs: - kafka-open", "required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: vector managementState: Managed", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management", "--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "required count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 
1h status: connectionState: lastObservedState: READY", "required count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] - USDmirrors", "required count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true", "optional count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi", "required count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {\"allowedUnsafeSysctls\":[\"net.ipv6.conf.all.accept_ra\"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37) # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. # default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G machineConfigPoolSelector: # For SNO: machineconfiguration.openshift.io/role: 'master' pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: # For SNO: node-role.kubernetes.io/master: \"\" node-role.kubernetes.io/worker: \"\" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: \"single-numa-node\" net: userLevelNetworking: false", "required pods per cluster / pods per node = total number of nodes needed", "2200 / 500 = 4.4", "2200 / 20 = 110", "required pods per cluster / total number of nodes = expected pods per node", "--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. 
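# Each instantiation of this template, keyed by the IDENTIFIER parameter, creates one DeploymentConfig running the pause image plus a matching ClusterIP Service; IDENTIFIER is appended to resource names so multiple copies can coexist.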
tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template", "oc create quota <name> --hard=count/<resource>.<group>=<quota> 1", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0", "cat gpu-quota.yaml", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "oc create pod gpu-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 
requests.cpu: \"1\" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: \"2\" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 limits.ephemeral-storage: \"4Gi\" 4 scopes: - NotTerminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 limits.ephemeral-storage: \"1Gi\" 4 scopes: - Terminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f <resource_quota_definition> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota \"test\" created oc describe quota test Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m", "oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - \"10s\"", "master-restart api master-restart controllers", "admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"core-resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 - type: \"Container\" max: cpu: \"2\" 6 memory: \"1Gi\" 7 min: cpu: \"100m\" 8 memory: \"4Mi\" 9 default: cpu: \"300m\" 10 memory: \"200Mi\" 11 defaultRequest: cpu: \"200m\" 12 memory: \"100Mi\" 13 maxLimitRequestRatio: cpu: \"10\" 14", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"openshift-resource-limits\" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: \"Pod\" max: cpu: \"2\" 4 memory: \"1Gi\" 5 ephemeral-storage: \"1Gi\" 6 min: cpu: \"1\" 7 memory: \"1Gi\" 8", "{ \"apiVersion\": \"v1\", \"kind\": \"LimitRange\", \"metadata\": { \"name\": \"pvcs\" 1 }, \"spec\": { \"limits\": [{ \"type\": \"PersistentVolumeClaim\", \"min\": { \"storage\": \"2Gi\" 2 }, \"max\": { \"storage\": \"50Gi\" 3 } } ] } }", "oc create -f 
<limit_range_file> -n <project>", "oc get limits -n demoproject", "NAME AGE resource-limits 6d", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - -", "oc delete limits <limit_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf", "oc create -f enable-rfs.yaml", "oc get mc", "oc delete mc 50-enable-rfs", "cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805", "cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805", "oc create -f 05-master-kernelarg-hpav.yaml", "oc create -f 05-worker-kernelarg-hpav.yaml", "oc delete -f 05-master-kernelarg-hpav.yaml", "oc delete -f 05-worker-kernelarg-hpav.yaml", "<domain> <iothreads>3</iothreads> 1 <devices> <disk type=\"block\" device=\"disk\"> 2 <driver ... 
iothread=\"2\"/> </disk> </devices> </domain>", "<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>", "<memballoon model=\"none\"/>", "sysctl kernel.sched_migration_cost_ns=60000", "kernel.sched_migration_cost_ns=60000", "cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]", "systemctl restart libvirtd", "echo 0 > /sys/module/kvm/parameters/halt_poll_ns", "echo 80000 > /sys/module/kvm/parameters/halt_poll_ns", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m", "oc get co/node-tuning -n openshift-cluster-node-tuning-operator", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE node-tuning 4.16.1 True False True 60m 1/5 Profiles with bootcmdline conflict", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... 
other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\\n' | sed 's|^.*/||'", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-no-reapply-sysctl namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.max_map_count=>524288 name: openshift-no-reapply-sysctl recommend: - match: - label: tuned.openshift.io/openshift-no-reapply-sysctl priority: 15 profile: openshift-no-reapply-sysctl operand: tunedConfig: reapply_sysctl: false", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: 
tuned-1-profile", "oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 7m36s rendered 7m36s tuned-1 65s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio", "vm.dirty_ratio = 55", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-hugepages namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 name: openshift-node-hugepages recommend: - priority: 20 profile: openshift-node-hugepages", "oc --kubeconfig=\"<management_cluster_kubeconfig>\" create -f tuned-hugepages.yaml 1", "hcp create nodepool aws --cluster-name <hosted_cluster_name> \\ 1 --name <nodepool_name> \\ 2 --node-count <nodepool_replicas> \\ 3 --instance-type <instance_type> \\ 4 --render > hugepages-nodepool.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: hugepages-nodepool namespace: clusters spec: management: upgradeType: InPlace tuningConfig: - name: tuned-hugepages", "oc --kubeconfig=\"<management_cluster_kubeconfig>\" create -f hugepages-nodepool.yaml", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 123m hugepages-8dfb1fed 1m23s rendered 123m", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 openshift-node True False 132m nodepool-1-worker-2 openshift-node True False 131m hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline", "BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... 
hugepagesz=2M hugepages=50", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources", "oc create -f nro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "oc create -f nro-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.16\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nro-sub.yaml", "oc get csv -n openshift-numaresources", "NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.16.2 numaresources-operator 4.16.2 Succeeded", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1", "oc create -f nrop.yaml", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other", "oc get numaresourcesoperators.nodetopology.openshift.io", "NAME AGE numaresourcesoperator 27s", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.16\" 1", "oc create -f nro-scheduler.yaml", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE 
deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"3\" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 nodeSelector: node-role.kubernetes.io/worker: \"\" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 kubeletConfig: cpuManagerPolicy: \"static\" 2 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" 3 memoryManagerPolicy: \"Static\" 4 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 5", "oc create -f nro-kubeletconfig.yaml", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", "\"topo-aware-scheduler\"", "apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"", "oc create -f nro-deployment.yaml", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m", "oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1", "oc get pods -n openshift-numaresources -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none>", "oc describe noderesourcetopologies.topology.node.k8s.io worker-1", "Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 
Name: memory Type: Node", "oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath=\"{ .status.qosClass }\"", "Guaranteed", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker", "oc get numaresop numaresourcesoperator -o json | jq '.status'", "\"config\": { \"infoRefreshMode\": \"Periodic\", \"infoRefreshPeriod\": \"10s\", \"podsFingerprinting\": \"Enabled\" }, \"name\": \"worker\"", "oc get crd | grep noderesourcetopologies", "NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", "topo-aware-scheduler", "oc get noderesourcetopologies.topology.node.k8s.io", "NAME AGE compute-0.example.com 17h compute-1.example.com 17h", "oc get noderesourcetopologies.topology.node.k8s.io -o yaml", "apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: 
\"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 92m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.16\" cacheResyncPeriod: \"5s\" 1", "oc create -f nro-scheduler-cacheresync.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 92m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v{product-version}\" scoringStrategy: type: \"MostAllocated\" 1", "oc create -f nro-scheduler-mostallocated.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get -n openshift-numaresources cm topo-aware-scheduler-config -o yaml | grep scoring -A 1", "scoringStrategy: type: MostAllocated", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 90m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: 
\"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.16\" logLevel: Debug", "oc create -f nro-scheduler-debug.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"", "{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}", "oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"", "{\"name\":\"resource-topology\"}", "oc get pods -n openshift-numaresources -l name=resource-topology -o wide", "NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com", "oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c", "I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi", "Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc get kubeletconfig -o yaml", "machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled", "oc get mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"", "oc edit mcp worker -o yaml", "labels: 
machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.16", "oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"livenessProbe\":{\"timeoutSeconds\":5},\"readinessProbe\":{\"timeoutSeconds\":5}}]}}}}'", "oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"tuningOptions\":{\"reloadInterval\":\"15s\"}}}'", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service", "oc apply -f mount_namespace_config.yaml", "machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1", "oc wait --for=condition=Updated mcp --all --timeout=30m", "machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# readlink /proc/1/ns/mnt", "mnt:[4026531953]", "sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt", "mnt:[4026531840]", "sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt", "mnt:[4026531840]", "ssh core@<node_name>", "[core@control-plane-1 ~]USD sudo kubensenter findmnt", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs", "[core@control-plane-1 ~]USD sudo kubensenter", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt", "[Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 
disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>", "oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'", "oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'", "oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'", "oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.provisioning.state}{\"\\n\"}{end}'", "master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed", "oc adm cordon <bare_metal_host> 1", "oc adm drain <bare_metal_host> --force=true", "oc patch <bare_metal_host> --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/online\", \"value\": false}]'", "oc adm uncordon <bare_metal_host>", "apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: \"true\"", "oc create -f bare-metal-events-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events", "oc create -f bare-metal-events-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: \"stable\" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f bare-metal-events-sub.yaml", "oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase", "oc get pods -n amq-interconnect", "NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h", "oc get pods -n openshift-bare-metal-events", "NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s", "curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u \"<bmc_username>:<password>\"", "{ \"@odata.context\": \"/redfish/v1/USDmetadata#EventService.EventService\", \"@odata.id\": \"/redfish/v1/EventService\", \"@odata.type\": \"#EventService.v1_0_2.EventService\", \"Actions\": { \"#EventService.SubmitTestEvent\": { \"[email protected]\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"target\": \"/redfish/v1/EventService/Actions/EventService.SubmitTestEvent\" } }, \"DeliveryRetryAttempts\": 3, \"DeliveryRetryIntervalSeconds\": 30, \"Description\": \"Event Service represents the properties for the service\", \"EventTypesForSubscription\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"[email protected]\": 5, \"Id\": \"EventService\", \"Name\": \"Event Service\", \"ServiceEnabled\": true, \"Status\": { \"Health\": \"OK\", \"HealthRollup\": \"OK\", \"State\": \"Enabled\" }, \"Subscriptions\": { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\" } }", "oc get route -n openshift-bare-metal-events", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None", "apiVersion: metal3.io/v1alpha1 kind: 
BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: ''", "oc create -f bmc_sub.yaml", "oc delete -f bmc_sub.yaml", "curl -i -k -X POST -H \"Content-Type: application/json\" -d '{\"Destination\": \"https://<proxy_service_url>\", \"Protocol\" : \"Redfish\", \"EventTypes\": [\"Alert\"], \"Context\": \"root\"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v", "HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: \"1651135676\" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT", "curl --globoff -H \"Content-Type: application/json\" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { \"@odata.context\": \"/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection\", \"@odata.etag\": \"\" 1651137375 \"\", \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\", \"@odata.type\": \"#EventDestinationCollection.EventDestinationCollection\", \"Description\": \"Collection for Event Subscriptions\", \"Members\": [ { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions/1\" }], \"[email protected]\": 1, \"Name\": \"Event Subscriptions Collection\" }", "curl --globoff -L -w \"%{http_code} %{url_effective}\\n\" -k -u <bmc_username>:<password >-H \"Content-Type: application/json\" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1", "apiVersion: \"event.redhat-cne.org/v1alpha1\" kind: \"HardwareEvent\" metadata: name: \"hardware-event\" spec: nodeSelector: node-role.kubernetes.io/hw-event: \"\" 1 logLevel: \"debug\" 2 msgParserTimeout: \"10\" 3", "oc create -f hardware-event.yaml", "apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address>", "oc create -f hw-event-bmc-secret.yaml", "[ { \"id\": \"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\": \"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "{ \"id\":\"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\":\"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", 
\"resource\":\"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "OK", "containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" 1 - \"--api-port=8089\"", "apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi", "oc create -f hugepages-volume-pod.yaml", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI", "REQUESTS_HUGEPAGES_1GI=2147483648", "oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request", "2", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom 
tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker", "oc create -f thp-disable-tuned.yaml", "oc get profile -n openshift-cluster-node-tuning-operator", "cat /sys/kernel/mm/transparent_hugepage/enabled", "always madvise [never]", "oc label node <node_name> node-role.kubernetes.io/worker-cnf=\"\" 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf 1 labels: machineconfiguration.openshift.io/role: worker-cnf 2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-cnf], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: \"\" 3", "oc apply -f mcp-worker-cnf.yaml", "machineconfigpool.machineconfiguration.openshift.io/worker-cnf created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_folder> 1", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 -h", "A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --info log --must-gather-dir-path /must-gather", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "level=info msg=\"Nodes targeted by worker-cnf MCP are: [worker-2]\" level=info msg=\"NUMA cell(s): 1\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"1 reserved CPUs allocated: 0 \" level=info msg=\"2 isolated CPUs allocated: 2-3\" level=info msg=\"Additional Kernel Args based on configuration: []\"", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "vi run-perf-profile-creator.sh", "#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" NTO_IMG=\"registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Node Tuning Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" && USD{CMD} \"USD{NTO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" || USD{IMG_PULL_CMD} \"USD{NTO_IMG}\" || exit_error \"Node Tuning Operator image not found\" [ -n 
\"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) NTO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{NTO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"", "chmod a+x run-perf-profile-creator.sh", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "./run-perf-profile-creator.sh -h", "Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image -t path to a must-gather tarball A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled --enable-hardware-tuning Enable setting maximum CPU frequencies", "./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]", "Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1]", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" 
nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "workloadHints: highPowerConsumption: false realTime: false", "workloadHints: highPowerConsumption: false realTime: true", "workloadHints: highPowerConsumption: true realTime: true", "workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - cpufreq.default_governor=schedutil 1", "spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1", "\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"", "lscpu --all --extended", "CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000", "cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "0-4", "cpu: isolated: 0,4 reserved: 1-3,5-7", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true", "find /proc/irq -name effective_affinity -printf \"%p: \" -exec cat {} \\;", "/proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1", "hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1", "oc debug node/ip-10-0-141-105.ec2.internal", "grep -i huge /proc/meminfo", "AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##", "oc describe node worker-0.ocp4poc.example.com | grep -i huge", "hugepages-1g=true hugepages-###: ### hugepages-###: ###", "spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G", "oc edit -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile 
metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc apply -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4", "udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3", "WARNING tuned.plugins.base: instance net_test: no matching devices available", "apiVersion: v1 kind: Pod metadata: name: dynamic-low-latency-pod annotations: cpu-quota.crio.io: \"disable\" 1 cpu-load-balancing.crio.io: \"disable\" 2 irq-load-balancing.crio.io: \"disable\" 3 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: dynamic-low-latency-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 4 runtimeClassName: performance-dynamic-low-latency-profile 5", "oc get pod -o 
wide", "NAME READY STATUS RESTARTS AGE IP NODE dynamic-low-latency-pod 1/1 Running 0 5h33m 10.131.0.10 cnf-worker.example.com", "oc exec -it dynamic-low-latency-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"", "Cpus_allowed_list: 2-3", "oc debug node/<node-name>", "sh-4.4# chroot /host", "sh-4.4#", "sh-4.4# cat /proc/irq/default_smp_affinity", "33", "sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;", "/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5", "apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc apply -f qos-pod.yaml --namespace=qos-example", "oc get pod qos-demo --namespace=qos-example --output=yaml", "spec: containers: status: qosClass: Guaranteed", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-load-balancing.crio.io: \"disable\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name> #", "apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>", "Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h", "oc describe mcp worker-cnf", "Message: Node 
node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync", "oc describe performanceprofiles performance", "Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded", "oc adm must-gather", "[must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable", "tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.focus=\"hwlatdetect\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 3 specs [...] 
• Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! 
-- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (366.08s) FAIL", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0", "hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.focus=\"cyclictest\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 3 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! 
-- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.48s) FAIL", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.focus=\"oslat\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will 
run 1 of 3 specs [...] • Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.42s) FAIL", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --report <report_folder_path> --ginkgo.v", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junit:/junit -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.junit-report junit/<file-name>.xml --ginkgo.v", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -", "run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.16\" -e LATENCY_TEST_RUNTIME=<time_in_seconds> <disconnected_registry>/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}", "TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.16 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror 
--insecure=true -a=USD(pwd)/dockerauth.json -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=<time_in_seconds> -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.16\" } ]", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 get nodes", "openshift-install create manifests --dir=<cluster-install-dir>", "vi <cluster-install-dir>/manifests/config-node-default-profile.yaml", "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: \"Default\"", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get KubeAPIServer -o yaml | grep -A 1 default-", "default-not-ready-toleration-seconds: - \"300\" default-unreachable-toleration-seconds: - \"300\"", "oc get KubeControllerManager -o yaml | grep -A 1 node-monitor", "node-monitor-grace-period: - 40s", "oc debug node/<worker-node-name> chroot /host cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency", "\"nodeStatusUpdateFrequency\": \"10s\"", "apiVersion: v1 baseDomain: devcluster.openshift.com cpuPartitioningMode: AllNodes 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: 
architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "oc get packagemanifests -n openshift-marketplace node-observability-operator", "NAME CATALOG AGE node-observability-operator Red Hat Operators 9h", "oc new-project node-observability-operator", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name'", "install-dt54w", "oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase'", "COMPLETE", "oc get deploy -n node-observability-operator", "NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h", "oc login -u kubeadmin https://<HOSTNAME>:6443", "oc project node-observability-operator", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: crio-kubelet", "apply -f nodeobservability.yaml", "nodeobservability.olm.openshift.io/cluster created", "oc get nob/cluster -o yaml | yq '.status.conditions'", "conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: \"True\" type: Ready", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster", "oc apply -f nodeobservabilityrun.yaml", "oc get nodeobservabilityrun 
nodeobservabilityrun -o yaml | yq '.status.conditions'", "conditions: - lastTransitionTime: \"2022-07-07T14:57:34Z\" message: Ready to start profiling reason: Ready status: \"True\" type: Ready - lastTransitionTime: \"2022-07-07T14:58:10Z\" message: Profiling query done reason: Finished status: \"True\" type: Finished", "for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo \"agent USD{a}\" mkdir -p \"/tmp/USD{a}\" for p in USD(oc exec \"USD{a}\" -c node-observability-agent -- bash -c \"ls /run/node-observability/*.pprof\"); do f=\"USD(basename USD{p})\" echo \"copying USD{f} to /tmp/USD{a}/USD{f}\" oc exec \"USD{a}\" -c node-observability-agent -- cat \"USD{p}\" > \"/tmp/USD{a}/USD{f}\" done done", "oc login -u kubeadmin https://<host_name>:6443", "oc project node-observability-operator", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: scripting 3", "oc apply -f nodeobservability.yaml", "nodeobservability.olm.openshift.io/cluster created", "oc get nob/cluster -o yaml | yq '.status.conditions'", "conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityScripting ready: true' reason: Ready status: \"True\" type: Ready", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun-script namespace: node-observability-operator spec: nodeObservabilityRef: name: cluster type: scripting", "oc apply -f nodeobservabilityrun-script.yaml", "oc get nodeobservabilityrun nodeobservabilityrun-script -o yaml | yq '.status.conditions'", "Status: Agents: Ip: 10.128.2.252 Name: node-observability-agent-n2fpm Port: 8443 Ip: 10.131.0.186 Name: node-observability-agent-wcc8p Port: 8443 Conditions: Conditions: Last Transition Time: 2023-12-19T15:10:51Z Message: Ready to start profiling Reason: Ready Status: True Type: Ready Last Transition Time: 2023-12-19T15:11:01Z Message: Profiling query done Reason: Finished Status: True Type: Finished Finished Timestamp: 2023-12-19T15:11:01Z Start Timestamp: 2023-12-19T15:10:51Z", "#!/bin/bash RUN=USD(oc get nodeobservabilityrun --no-headers | awk '{print USD1}') for a in USD(oc get nodeobservabilityruns.nodeobservability.olm.openshift.io/USD{RUN} -o json | jq .status.agents[].name); do echo \"agent USD{a}\" agent=USD(echo USD{a} | tr -d \"\\\"\\'\\`\") base_dir=USD(oc exec \"USD{agent}\" -c node-observability-agent -- bash -c \"ls -t | grep node-observability-agent\" | head -1) echo \"USD{base_dir}\" mkdir -p \"/tmp/USD{agent}\" for p in USD(oc exec \"USD{agent}\" -c node-observability-agent -- bash -c \"ls USD{base_dir}\"); do f=\"/USD{base_dir}/USD{p}\" echo \"copying USD{f} to /tmp/USD{agent}/USD{p}\" oc exec \"USD{agent}\" -c node-observability-agent -- cat USD{f} > \"/tmp/USD{agent}/USD{p}\" done done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/scalability_and_performance/index
Chapter 8. OperatorGroup [operators.coreos.com/v1]
Chapter 8. OperatorGroup [operators.coreos.com/v1] Description OperatorGroup is the unit of multitenancy for OLM managed operators. It constrains the installation of operators in its namespace to a specified set of target namespaces. Type object Required metadata 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorGroupSpec is the spec for an OperatorGroup resource. status object OperatorGroupStatus is the status for an OperatorGroupResource. 8.1.1. .spec Description OperatorGroupSpec is the spec for an OperatorGroup resource. Type object Property Type Description selector object Selector selects the OperatorGroup's target namespaces. serviceAccountName string ServiceAccountName is the admin specified service account which will be used to deploy operator(s) in this operator group. staticProvidedAPIs boolean Static tells OLM not to update the OperatorGroup's providedAPIs annotation targetNamespaces array (string) TargetNamespaces is an explicit set of namespaces to target. If it is set, Selector is ignored. upgradeStrategy string UpgradeStrategy defines the upgrade strategy for operators in the namespace. There are currently two supported upgrade strategies: Default: OLM will only allow clusterServiceVersions to move to the replacing phase from the succeeded phase. This effectively means that OLM will not allow operators to move to the version if an installation or upgrade has failed. TechPreviewUnsafeFailForward: OLM will allow clusterServiceVersions to move to the replacing phase from the succeeded phase or from the failed phase. Additionally, OLM will generate new installPlans when a subscription references a failed installPlan and the catalog has been updated with a new upgrade for the existing set of operators. WARNING: The TechPreviewUnsafeFailForward upgrade strategy is unsafe and may result in unexpected behavior or unrecoverable data loss unless you have deep understanding of the set of operators being managed in the namespace. 8.1.2. .spec.selector Description Selector selects the OperatorGroup's target namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. 
Type array 8.1.4. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.5. .status Description OperatorGroupStatus is the status for an OperatorGroupResource. Type object Required lastUpdated Property Type Description conditions array Conditions is an array of the OperatorGroup's conditions. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string LastUpdated is a timestamp of the last time the OperatorGroup's status was Updated. namespaces array (string) Namespaces is the set of target namespaces for the OperatorGroup. serviceAccountRef object ServiceAccountRef references the service account object specified. 8.1.6. .status.conditions Description Conditions is an array of the OperatorGroup's conditions. Type array 8.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. 
Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 8.1.8. .status.serviceAccountRef Description ServiceAccountRef references the service account object specified. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 8.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/operatorgroups GET : list objects of kind OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups DELETE : delete collection of OperatorGroup GET : list objects of kind OperatorGroup POST : create an OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name} DELETE : delete an OperatorGroup GET : read the specified OperatorGroup PATCH : partially update the specified OperatorGroup PUT : replace the specified OperatorGroup /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name}/status GET : read status of the specified OperatorGroup PATCH : partially update status of the specified OperatorGroup PUT : replace status of the specified OperatorGroup 8.2.1. /apis/operators.coreos.com/v1/operatorgroups HTTP method GET Description list objects of kind OperatorGroup Table 8.1. HTTP responses HTTP code Reponse body 200 - OK OperatorGroupList schema 401 - Unauthorized Empty 8.2.2. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups HTTP method DELETE Description delete collection of OperatorGroup Table 8.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorGroup Table 8.3. HTTP responses HTTP code Reponse body 200 - OK OperatorGroupList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorGroup Table 8.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.5. Body parameters Parameter Type Description body OperatorGroup schema Table 8.6. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 202 - Accepted OperatorGroup schema 401 - Unauthorized Empty 8.2.3. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name} Table 8.7. Global path parameters Parameter Type Description name string name of the OperatorGroup HTTP method DELETE Description delete an OperatorGroup Table 8.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorGroup Table 8.10. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorGroup Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.12. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorGroup Table 8.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.14. Body parameters Parameter Type Description body OperatorGroup schema Table 8.15. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 401 - Unauthorized Empty 8.2.4. /apis/operators.coreos.com/v1/namespaces/{namespace}/operatorgroups/{name}/status Table 8.16. Global path parameters Parameter Type Description name string name of the OperatorGroup HTTP method GET Description read status of the specified OperatorGroup Table 8.17. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorGroup Table 8.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.19. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorGroup Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.21. Body parameters Parameter Type Description body OperatorGroup schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK OperatorGroup schema 201 - Created OperatorGroup schema 401 - Unauthorized Empty
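As a practical illustration of the spec fields described in section 8.1, the following is a minimal sketch of an OperatorGroup manifest that constrains operator installation to a single target namespace; the resource and namespace names are placeholders, not values taken from this reference:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: example-operatorgroup
  namespace: example-namespace
spec:
  targetNamespaces:
  - example-namespace
An OperatorGroup that omits both spec.targetNamespaces and spec.selector selects all namespaces in the cluster.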
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operatorhub_apis/operatorgroup-operators-coreos-com-v1
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/upgrading_sap_environments_from_rhel_7_to_rhel_8/conscious-language-message_upgrading-7-to-8
Chapter 7. Preparing to update a cluster with manually maintained credentials
Chapter 7. Preparing to update a cluster with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked. Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to. 7.1. Update requirements for clusters with manually maintained credentials Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release. If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources. After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update. Note The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command line tools. 7.1.1. Cloud credential configuration options and update requirements by platform type Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements. For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration. Figure 7.1. Credentials update requirements by platform type Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation. Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process. IBM Cloud and Nutanix Clusters installed on these platforms are configured using the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Configure the ccoctl utility for the new release. Use the ccoctl utility to update the cloud provider resources. Indicate that the cluster is ready to update with the upgradeable-to annotation. Microsoft Azure Stack Hub These clusters use manual mode with long-lived credentials and do not use the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Manually update the cloud provider resources for the new release. Indicate that the cluster is ready to update with the upgradeable-to annotation. Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) Clusters installed on these platforms support multiple CCO modes. The required update process depends on the mode that the cluster is configured to use. 
If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information. Additional resources Determining the Cloud Credential Operator mode by using the web console Determining the Cloud Credential Operator mode by using the CLI Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials About the Cloud Credential Operator 7.1.2. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. 
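For reference, on a cluster that was configured with the ccoctl utility, the Authentication object YAML might look similar to the following sketch; the issuer URL is illustrative only and differs for each cloud provider and cluster:
apiVersion: config.openshift.io/v1
kind: Authentication
metadata:
  name: cluster
spec:
  serviceAccountIssuer: https://<oidc_bucket_name>.s3.<region>.amazonaws.com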
A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials 7.1.3. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. 
AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials 7.2. Configuring the Cloud Credential Operator utility for a cluster update To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. 
Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.3. Updating cloud provider resources with the Cloud Credential Operator utility The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility ( ccoctl ) is similar to creating the cloud provider resources during installation. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Obtain the OpenShift Container Platform release image for the version that you are upgrading to. Extract and prepare the ccoctl binary from the release image. Procedure Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract --credentials-requests \ --cloud=<provider_type> \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ quay.io/<path_to>/ocp-release:<version> where: <provider_type> is the value for your cloud provider. Valid values are alibabacloud , aws , gcp , ibmcloud , and nutanix . credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored. 
Sample AWS CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 1 This field indicates the namespace which needs to exist to hold the generated secret. The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values. For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace , create the namespace by running the following command: USD oc create namespace <component_namespace> Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory by running the command for your cloud provider. The following commands process CredentialsRequest objects: Alibaba Cloud: ccoctl alibabacloud create-ram-users Amazon Web Services (AWS): ccoctl aws create-iam-roles Google Cloud Platform (GCP): ccoctl gcp create-all IBM Cloud: ccoctl ibmcloud create-service-id Nutanix: ccoctl nutanix create-shared-secrets Important Refer to the ccoctl utility instructions in the installation content for your cloud provider for important platform-specific details about the required arguments and special considerations. For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Apply the secrets to your cluster by running the following command: USD ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verification You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Creating Alibaba Cloud credentials for OpenShift Container Platform components with the ccoctl tool Creating AWS resources with the Cloud Credential Operator utility Creating GCP resources with the Cloud Credential Operator utility Manually creating IAM for IBM Cloud VPC Configuring IAM for Nutanix Indicating that the cluster is ready to upgrade 7.4. Updating cloud provider resources with manually maintained credentials Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. 
If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on AWS 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6 1 The Machine API Operator CR is required. 2 The Cloud Credential Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Network Operator CR is required. 6 The Storage Operator CR is an optional component and might be disabled in your cluster. Example credrequests directory contents for OpenShift Container Platform 4.12 on GCP 0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for Azure Stack Hub Manually creating IAM for GCP Indicating that the cluster is ready to upgrade 7.5. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. 
If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade.
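Note that, as an alternative to editing the resource interactively, you can usually apply the same annotation non-interactively with a single command; the version number shown is only an example and must match the release you are updating to:
oc annotate cloudcredential cluster cloudcredential.openshift.io/upgradeable-to=4.12.2 --overwrite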
[ "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc adm release extract --credentials-requests --cloud=<provider_type> --to=<path_to_directory_with_list_of_credentials_requests>/credrequests quay.io/<path_to>/ocp-release:<version>", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1", "oc create namespace <component_namespace>", "ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/preparing-manual-creds-update
Chapter 17. Deleting applications
Chapter 17. Deleting applications To delete an application via the Admin Portal, you need to follow these steps: Option 1 : Delete an application from the list of all applications for [Your_API_name]. In the Dashboard, click [Your_API_name] . Click the Overview tab. From the left panel on the Overview page, click Applications . Choose Listing . Click on an application. You will see a page containing details of the application. Click Edit . To delete the application, click Delete . You will see a confirmation message. Click Ok to confirm the deletion. Option 2 : Delete an application based on a specific application plan. In the Admin Portal, click Dashboard . Choose API . Under Published Application Plans , choose an application. Click on an application. You will see a page containing details of the application. Click Edit . To delete the application, click Delete . You will see a confirmation message. Click Ok to confirm the deletion. Alternatively, you can also delete an application via 3scale API Docs, with the operation called Application Delete .
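If you prefer to script the deletion instead of using the Admin Portal, the Account Management API exposes the same Application Delete operation. The following curl call is only a sketch: the admin portal domain, access token, account ID, and application ID are placeholders, and you should confirm the exact path in the API Docs of your 3scale installation:
curl -X DELETE "https://<admin_portal_domain>/admin/api/accounts/<account_id>/applications/<application_id>.xml?access_token=<access_token>"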
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/delete-application
4.3. Setting Index Scan Limits
4.3. Setting Index Scan Limits In large directories, the search results list can get huge. A directory with a million inetorgperson entries would return a million entries for a filter like (objectclass=inetorgperson) , and an index for the sn attribute would have at least a million entries in it. Loading a long ID list from the database significantly reduces search performance. The configuration parameter, nsslapd-idlistscanlimit , sets a limit on the number of IDs that are read before a key is considered to match the entire primary index (meaning the search is treated as an unindexed search with a different set of resource limits). For large indexes, it is actually more efficient to treat any search which matches the index as an unindexed search. The search operation only has to look in one place to process results (the entire directory) rather than searching through an index that is nearly the size of a directory, plus the directory itself. The default value of the nsslapd-idlistscanlimit attribute is 4000 , which gives good performance for a common range of database sizes and access patterns. It is usually not necessary to change this value. If the database index is slightly larger than 4000 entries, but still significantly smaller than the overall directory, raising the scan limit improves searches that would otherwise hit the default limit of 4000. On the other hand, lowering the limit can significantly speed up searches that would otherwise hit the 4000 entry limit, but where it is not necessary to scan every entry. 4.3.1. Setting an Index Scan Limit Using the Command Line To set an index scan limit using the command line: For example, to set the number of entry IDs that Directory Server searches during a search operation to 8000 : Restart the Directory Server instance: 4.3.2. Setting an Index Scan Limit Using the Web Console To set an index scan limit using the Web Console: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. On the Database tab, select Global Database Configuration . Update the value in the ID List Scan Limit field. Click Save Configuration . Click the Actions button, and select Restart Instance .
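To check which value is currently in effect before or after a change, you can read the attribute from the global database configuration entry. The following ldapsearch call is a sketch that reuses the bind DN and host name from the examples in this section; adjust them for your environment:
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://server.example.com -b "cn=config,cn=ldbm database,cn=plugins,cn=config" -s base "(objectclass=*)" nsslapd-idlistscanlimit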
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --idlistscanlimit=8000", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/setting-scan-limits
Chapter 14. Hardening the Dashboard service
Chapter 14. Hardening the Dashboard service The Dashboard service (horizon) gives users a self-service portal for provisioning their own resources within the limits set by administrators. Manage the security of the Dashboard service with the same sensitivity as the OpenStack APIs. 14.1. Debugging the Dashboard service The default value for the DEBUG parameter is False . Keep the default value in your production environment. Change this setting only during investigation. When you change the value of the DEBUG parameter to True , Django can output stack traces to browser users that contain sensitive web server state information. When the value of the DEBUG parameter is True , the ALLOWED_HOSTS settings are also disabled. For more information on configuring ALLOWED_HOSTS , see Configure ALLOWED_HOSTS . 14.2. Selecting a domain name It is a best practice to deploy the Dashboard service (horizon) to a second-level domain, as opposed to a shared domain on any level. Examples of each are provided below: Second-level domain: https://example.com Shared subdomain: https://example.public-url.com Deploying the Dashboard service to a dedicated second-level domain isolates cookies and security tokens from other domains, based on browsers' same-origin policy. When deployed on a subdomain, the security of the Dashboard service is equivalent to the least secure application deployed on the same second-level domain. You can further mitigate this risk by avoiding a cookie-backed session store, and configuring HTTP Strict Transport Security (HSTS) (described in this guide). Note Deploying the Dashboard service on a bare domain, like https://example/ , is unsupported. 14.3. Configure ALLOWED_HOSTS Horizon is built on the Python Django web framework, which requires protection against security threats associated with misleading HTTP Host headers. To apply this protection, configure the ALLOWED_HOSTS setting to use the FQDN that is served by the OpenStack dashboard. When you configure the ALLOWED_HOSTS setting, any HTTP request with a Host header that does not match the values in this list is denied, and an error is raised. Procedure Under parameter_defaults in your templates, set the value of the HorizonAllowedHosts parameter: Replace <value> with the FQDN that is served by the OpenStack dashboard. Deploy the overcloud with the modified template, and all other templates required for your environment. 14.4. Cross Site Scripting (XSS) The OpenStack Dashboard accepts the entire Unicode character set in most fields. Malicious actors can attempt to use this extensibility to test for cross-site scripting (XSS) vulnerabilities. The OpenStack Dashboard service (horizon) has tools that harden against XSS vulnerabilities. It is important to ensure the correct use of these tools in custom dashboards. When you perform an audit against custom dashboards, pay attention to the following: The mark_safe function. is_safe - when used with custom template tags. The safe template tag. Anywhere auto escape is turned off, and any JavaScript which might evaluate improperly escaped data. 14.5. Cross Site Request Forgery (CSRF) Dashboards that use multiple JavaScript instances should be audited for vulnerabilities such as inappropriate use of the @csrf_exempt decorator. Evaluate any dashboard that does not follow recommended security settings before lowering CORS (Cross Origin Resource Sharing) restrictions. Configure your web server to send a restrictive CORS header with each response.
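For an Apache httpd based deployment, one way to send such a header is a mod_headers directive in the Dashboard virtual host configuration. The following line is a sketch only; it assumes that the headers module is enabled and that you replace the domain with your own:
Header set Access-Control-Allow-Origin "https://example.com"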
Allow only the dashboard domain and protocol, for example: Access-Control-Allow-Origin: https://example.com/ . You should never allow the wild card origin. 14.6. Allow iframe embedding The DISALLOW_IFRAME_EMBED setting disallows Dashboard from being embedded within an iframe. Legacy browsers can still be vulnerable to Cross-Frame Scripting (XFS) vulnerabilities, so this option adds extra security hardening for deployments that do not require iframes. The setting is set to True by default, however it can be disabled using an environment file, if needed. Procedure You can allow iframe embedding using the following parameter: Note These settings should only be set to False once the potential security impacts are fully understood. 14.7. Using HTTPS encryption for Dashboard traffic It is recommended you use HTTPS to encrypt Dashboard traffic. You can do this by configuring it to use a valid, trusted certificate from a recognized certificate authority (CA). Private organization-issued certificates are only appropriate when the root of trust is pre-installed in all user browsers. Configure HTTP requests to the dashboard domain to redirect to the fully qualified HTTPS URL. See Chapter 7, Enabling SSL/TLS on overcloud public endpoints . for more information. 14.8. HTTP Strict Transport Security (HSTS) HTTP Strict Transport Security (HSTS) prevents browsers from making subsequent insecure connections after they have initially made a secure connection. If you have deployed your HTTP services on a public or an untrusted zone, HSTS is especially important. For director-based deployments, this setting is enabled by default in the /usr/share/openstack-tripleo-heat-templates/deployment/horizon/horizon-container-puppet.yaml file: Verification After the overcloud is deployed, check the local_settings file for Red Hat OpenStack Dashboard (horizon) for verification. Use ssh to connect to a controller: USD ssh tripleo-admin@controller-0 Check that the SECURE_PROXY_SSL_HEADER parameter has a value of ('HTTP_X_FORWARDED_PROTO', 'https') : sudo egrep ^SECURE_PROXY_SSL_HEADER /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') 14.9. Front-end caching It is not recommended to use front-end caching tools with the Dashboard, as it renders dynamic content resulting directly from OpenStack API requests. As a result, front-end caching layers such as varnish can prevent the correct content from being displayed. The Dashboard uses Django, which serves static media directly served from the web service and already benefits from web host caching. 14.10. Session backend For director-based deployments, the default session backend for horizon is django.contrib.sessions.backends.cache , which is combined with memcached. This approach is preferred to local-memory cache for performance reasons, is safer for highly-available and load balanced installs, and has the ability to share cache over multiple servers, while still treating it as a single cache. You can review these settings in director's horizon.yaml file: 14.11. Reviewing the secret key The Dashboard depends on a shared SECRET_KEY setting for some security functions. The secret key should be a randomly generated string at least 64 characters long, which must be shared across all active dashboard instances. Compromise of this key might allow a remote attacker to execute arbitrary code. Rotating this key invalidates existing user sessions and caching. 
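One way to generate a value of the required length is to hex-encode 32 random bytes, which produces a 64-character string; this is a sketch of one possible approach, not a mandated procedure:
openssl rand -hex 32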
Do not commit this key to public repositories. For director deployments, this setting is managed as the HorizonSecret value. 14.12. Configuring session cookies The Dashboard session cookies can be open to interaction by browser technologies, such as JavaScript. For director deployments with TLS everywhere, you can harden this behavior using the HorizonSecureCookies setting. Note Never configure CSRF or session cookies to use a wildcard domain with a leading dot. 14.13. Validating password complexity The OpenStack Dashboard (horizon) can use a password validation check to enforce password complexity. Procedure Specify a regular expression for password validation, as well as help text to be displayed for failed tests. The following example requires users to create a password of between 8 to 18 characters in length: Apply this change to your deployment. Save the settings as a file called horizon_password.yaml , and then pass it to the overcloud deploy command as follows. The <full environment> indicates that you must still include all of your original deployment parameters. For example: 14.14. Enforce the administrator password check The following setting is set to True by default, however it can be disabled using an environment file, if needed. Note These settings should only be set to False once the potential security impacts are fully understood. Procedure The ENFORCE_PASSWORD_CHECK setting in Dashboard's local_settings.py file displays an Admin Password field on the Change Password form, which helps verify that an administrator is initiating the password change. You can disable ENFORCE_PASSWORD_CHECK using an environment file: 14.15. Disable password reveal The disable_password_reveal parameter is set to True by default, however it can be disabled using an environment file, if needed. The password reveal button allows a user at the Dashboard to view the password they are about to enter. Procedure Under the ControllerExtraConfig parameter, include horizon::disable_password_reveal: false . Save this to a heat environment file and include it with your deployment command. Example Note These settings should only be set to False once the potential security impacts are fully understood. 14.16. Displaying a logon banner for the Dashboard Regulated industries such as HIPAA, PCI-DSS, and the US Government require you to display a user logon banner. The Red Hat OpenStack Platform (RHOSP) dashboard (horizon) uses a default theme (RCUE), which is stored inside the horizon container. Within the custom Dashboard container, you can create a logon banner by manually editing the /usr/share/openstack-dashboard/openstack_dashboard/themes/rcue/templates/auth/login.html file: Procedure Enter the required logon banner just before the {% include 'auth/_login.html' %} section. HTML tags are allowed: The above example produces a dashboard similar to the following: Additional resources Customizing the dashboard 14.17. Limiting the size of file uploads You can optionally configure the dashboard to limit the size of file uploads; this setting might be a requirement for various security hardening policies. LimitRequestBody - This value (in bytes) limits the maximum size of a file that you can transfer using the Dashboard, such as images and other large files. Important This setting has not been formally tested by Red Hat. It is recommended that you thoroughly test the effect of this setting before deploying it to your production environment. Note File uploads will fail if the value is too small. 
For example, this setting limits each file upload to a maximum size of 10 GB ( 10737418240 ). You will need to adjust this value to suit your deployment. /var/lib/config-data/puppet-generated/horizon/etc/httpd/conf/httpd.conf /var/lib/config-data/puppet-generated/horizon/etc/httpd/conf.d/10-horizon_vhost.conf /var/lib/config-data/puppet-generated/horizon/etc/httpd/conf.d/15-horizon_ssl_vhost.conf Note These configuration files are managed by Puppet, so any unmanaged changes are overwritten whenever you run the openstack overcloud deploy process.
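For reference, several of the hardening parameters described in this chapter can be collected into a single heat environment file. The following sketch is illustrative only: the dashboard FQDN, the password policy, and the decision to enable HorizonSecureCookies are assumptions that you must adapt to your own deployment before passing the file to the openstack overcloud deploy command with -e.
# horizon_hardening.yaml - illustrative sketch; adjust values before use
parameter_defaults:
  HorizonAllowedHosts: horizon.example.com        # assumed dashboard FQDN
  HorizonSecureCookies: true                      # TLS-everywhere deployments only
  HorizonPasswordValidator: '^.{8,18}$'
  HorizonPasswordValidatorHelp: 'Password must be between 8 and 18 characters.'
  ControllerExtraConfig:
    horizon::enforce_password_check: true
    horizon::disable_password_reveal: true
    horizon::disallow_iframe_embed: true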
[ "parameter_defaults: HorizonAllowedHosts: <value>", "parameter_defaults: ControllerExtraConfig: horizon::disallow_iframe_embed: false", "horizon::enable_secure_proxy_ssl_header: true", "ssh tripleo-admin@controller-0", "sudo egrep ^SECURE_PROXY_SSL_HEADER /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')", "horizon::cache_backend: django.core.cache.backends.memcached.MemcachedCache horizon::django_session_engine: 'django.contrib.sessions.backends.cache'", "parameter_defaults: HorizonPasswordValidator: '^.{8,18}USD' HorizonPasswordValidatorHelp: 'Password must be between 8 and 18 characters.'", "openstack overcloud deploy --templates -e <full environment> -e horizon_password.yaml", "parameter_defaults: ControllerExtraConfig: horizon::enforce_password_check: false", "parameter_defaults: ControllerExtraConfig: horizon::disable_password_reveal: false", "<snip> <div class=\"container\"> <div class=\"row-fluid\"> <div class=\"span12\"> <div id=\"brand\"> <img src=\"../../static/themes/rcue/images/RHOSP-Login-Logo.svg\"> </div><!--/#brand--> </div><!--/.span*--> <!-- Start of Logon Banner --> <p>Authentication to this information system reflects acceptance of user monitoring agreement.</p> <!-- End of Logon Banner --> {% include 'auth/_login.html' %} </div><!--/.row-fluid-> </div><!--/.container--> {% block js %} {% include \"horizon/_scripts.html\" %} {% endblock %} </body> </html>", "<Directory /> LimitRequestBody 10737418240 </Directory>", "<Directory \"/var/www\"> LimitRequestBody 10737418240 </Directory>", "<Directory \"/var/www\"> LimitRequestBody 10737418240 </Directory>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_hardening-the-dashboard-service_security_and_hardening
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions Revised on 2020-12-03 08:53:12 UTC
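For context, the registration command generated by the Registration Assistant is typically based on the subscription-manager tool. The following sketch is illustrative only; the username, pool ID, and repository ID are placeholders, and the exact repository to enable depends on the AMQ component and RHEL version you are using.
sudo subscription-manager register --username <portal-username>
sudo subscription-manager attach --pool=<pool-id>            # only if auto-attach is not used
sudo subscription-manager repos --enable=<amq-repository-id>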
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_jon_with_amq_broker/using_your_subscription
Chapter 11. Supported integration with Red Hat products
Chapter 11. Supported integration with Red Hat products AMQ Streams 2.5 supports integration with the following Red Hat products: Red Hat Single Sign-On Provides OAuth 2.0 authentication and OAuth 2.0 authorization. For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the product documentation. 11.1. Red Hat Single Sign-On AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Additional resources Red Hat Single Sign-On Supported Configurations Revised on 2024-09-04 16:24:13 UTC
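As a rough illustration of what this integration looks like on a broker, the following server.properties sketch enables OAUTHBEARER authentication and Keycloak-backed authorization. It is not taken from this release's documentation: the listener name, realm URLs, and client ID are assumptions, and you should confirm the exact property and class names against the AMQ Streams OAuth documentation for your version.
# Illustrative sketch only - verify property and class names for your release.
sasl.enabled.mechanisms=OAUTHBEARER
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.valid.issuer.uri="https://sso.example.com/auth/realms/kafka" \
  oauth.jwks.endpoint.uri="https://sso.example.com/auth/realms/kafka/protocol/openid-connect/certs" ;
listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler
# Token-based authorization delegated to Red Hat Single Sign-On Authorization Services
authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer
strimzi.authorization.token.endpoint.uri=https://sso.example.com/auth/realms/kafka/protocol/openid-connect/token
strimzi.authorization.client.id=kafka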
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_rhel/supported-config-str
Appendix B. Revision History
Appendix B. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Directory Server. Version Date and change Author 11.5-1 May 10 2022: Red Hat Directory Server 11.5 release of this guide Marc Muehlfeld 11.4-1 Nov 09 2021: Red Hat Directory Server 11.4 release of this guide Marc Muehlfeld 11.3-1 May 11 2021: Red Hat Directory Server 11.3 release of this guide Marc Muehlfeld 11.2-1 Nov 03 2020: Red Hat Directory Server 11.2 release of this guide Marc Muehlfeld 11.1-1 Apr 28 2020: Red Hat Directory Server 11.1 release of this guide Marc Muehlfeld 11.0-1 Nov 05 2019: Red Hat Directory Server 11.0 release of this guide Marc Muehlfeld
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/installation_guide/revision_history
Chapter 5. Enabling HTTP/2 for the Red Hat JBoss Web Server
Chapter 5. Enabling HTTP/2 for the Red Hat JBoss Web Server The Hypertext Transfer Protocols (HTTP) are standard methods of transmitting data between applications, such as servers and browsers, over the internet. JBoss Web Server supports the use of HTTP/2 for encrypted connections that are using Transport Layer Security (TLS), which is indicated by the h2 keyword when enabled. HTTP/2 improves on HTTP/1.1 by providing the following enhancements: Header compression omits implied information to reduce the size of the header that is transmitted. Multiple requests and responses over a single connection use binary framing rather than textual framing to break down response messages. Note JBoss Web Server does not support the use of HTTP/2 for unencrypted connections that are using the Transmission Control Protocol (TCP), which is indicated by the h2c keyword when enabled. 5.1. Prerequisites You have root user access on Red Hat Enterprise Linux. You have installed Red Hat JBoss Web Server 5.0 or later. You have installed the openssl and apr packages that are provided with Red Hat Enterprise Linux. For more information about installing the openssl and apr packages, see Red Hat Enterprise Linux package requirements . Note These operating system native libraries are also provided by jws-6.0.0-application-server- <platform> - <architecture> .zip where available. If you want to run JSSE+OpenSSL or APR on Red Hat Enterprise Linux version 8 or 9, you must use Tomcat-Native to ensure successful operation. Tomcat-Native is located in the native archive directory. You have configured a connector that supports the HTTP/2 protocol with SSL enabled. For JBoss Web Server 6.0, the following connectors support the HTTP/2 protocol: The NIO connector with JSSE + OpenSSL (JSSE) The NIO2 connector with JSSE + OpenSSL (JSSE) 5.2. Enabling HTTP/2 for a connector In the server.xml file, the upgrade protocol in the connector definition is already set to HTTP/2 by default. Procedure Open the JWS_HOME /tomcat/conf/server.xml configuration file. In the connector definition, ensure that the UpgradeProtocol class name is set to org.apache.coyote.http2.Http2Protocol . For example: <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" maxThreads="150" SSLEnabled="true" maxParameterCount="1000"> <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" /> <SSLHostConfig> <Certificate certificateKeystoreFile="conf/localhost-rsa.jks" certificateKeystorePassword="changeit" type="RSA" /> </SSLHostConfig> </Connector> To apply any configuration updates, restart the Red Hat JBoss Web Server as the root user. To restart JBoss Web Server on Red Hat Enterprise Linux by using systemd , enter the following command: To restart JBoss Web Server on Red Hat Enterprise Linux by using startup.sh , enter the following commands: To restart JBoss Web Server on Microsoft Windows, enter the following command: 5.3. Viewing JBoss Web Server logs to verify that HTTP/2 is enabled You can view the JBoss Web Server console output log to verify that HTTP/2 is enabled. Prerequisites You have enabled HTTP/2 for a connector . Procedure To view the console output log, enter the following command: Note In the preceding command, replace JWS_HOME with the top-level directory for your JBoss Web Server installation. Verification If HTTP/2 is enabled, the command produces the following type of output that indicates the connector has been configured to support negotiation to [h2] : 5.4. 
Using the curl command to verify that HTTP/2 is enabled You can use the curl command-line tool to verify that HTTP/2 is enabled. Prerequisites You have enabled HTTP/2 for a connector . You are using a version of curl that supports HTTP/2. To check that you are using a version of curl that supports HTTP/2, enter the following command: This command produces the following type of output: Procedure To check that the HTTP/2 protocol is active, enter the following command: Note In the preceding example, replace <JBoss_Web_Server> with the URI of the modified connector, such as example.com . The port number is dependent on your configuration. Verification If the HTTP/2 protocol is active, the curl command produces the following output: Otherwise, if the HTTP/2 protocol is inactive, the curl command produces the following output: 5.5. Additional resources (or steps) For more information about using HTTP/2, see Apache Tomcat 10 Configuration Reference: The HTTP Connector - HTTP/2 Support . For more information about the HTTP/2 Upgrade Protocol and the supported attributes, see Apache Tomcat 10 Configuration Reference: The HTTP2 Upgrade Protocol . For more information about the proposed internet standard for HTTP/2, see IETF: RFC 7540 - Hypertext Transfer Protocol Version 2 (HTTP/2) .
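If curl is not available on the client machine, an OpenSSL client check is one possible alternative. This is a sketch rather than a documented procedure; replace <JBoss_Web_Server> and the port with the values for your connector, and note that the output format varies between OpenSSL versions.
openssl s_client -connect <JBoss_Web_Server>:8443 -alpn h2 < /dev/null 2>/dev/null | grep -i "ALPN"
# A line such as "ALPN protocol: h2" indicates that HTTP/2 was negotiated;
# "ALPN protocol: http/1.1" (or no ALPN line) indicates that it was not.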
[ "<Connector port=\"8443\" protocol=\"org.apache.coyote.http11.Http11NioProtocol\" maxThreads=\"150\" SSLEnabled=\"true\" maxParameterCount=\"1000\"> <UpgradeProtocol className=\"org.apache.coyote.http2.Http2Protocol\" /> <SSLHostConfig> <Certificate certificateKeystoreFile=\"conf/localhost-rsa.jks\" certificateKeystorePassword=\"changeit\" type=\"RSA\" /> </SSLHostConfig> </Connector>", "systemctl restart jws6-tomcat.service", "JWS_HOME /sbin/shudown.sh JWS_HOME /sbin/startup.sh", "net restart tomcat10", "cat JWS_HOME /tomcat/logs/catalina.out | grep 'h2'", "06-Apr-2018 04:49:26.201 INFO [main] org.apache.coyote.http11.AbstractHttp11Protocol.configureUpgradeProtocol The [\" connector_name \"] connector has been configured to support negotiation to [h2] via ALPN", "curl -V", "curl 7.55.1 (x86_64-redhat-linux-gnu) Release-Date: 2017-08-14 Protocols: dict file ftp ftps gopher http https Features: AsynchDNS IDN IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy Metalink PSL", "curl -I http:// <JBoss_Web_Server> :8080/", "HTTP/2 200", "HTTP/1.1 200" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/assembly_enabling-http2-for-jws_jboss_web_server_installation_guide
Chapter 1. Authorization APIs
Chapter 1. Authorization APIs 1.1. LocalResourceAccessReview [authorization.openshift.io/v1] Description LocalResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. SubjectRulesReview [authorization.openshift.io/v1] Description SubjectRulesReview is a resource you can create to determine which actions another user can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object 1.8. TokenReview [authentication.k8s.io/v1] Description TokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver. Type object 1.9. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object 1.10. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object 1.11. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. 
It should NOT be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object 1.12. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object
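To make the relationship between these objects and everyday usage concrete, here is a minimal, hypothetical SelfSubjectAccessReview; the namespace and resource are placeholders. Submitting it returns a status field that reports whether the current user is allowed to perform the action (the oc auth can-i command issues a similar review on your behalf).
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    namespace: my-project     # placeholder namespace
    verb: create
    resource: pods
# Submit it and read .status.allowed from the response:
#   oc create -f selfsubjectaccessreview.yaml -o yaml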
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/authorization-apis
Chapter 75. KafkaClientAuthenticationPlain schema reference
Chapter 75. KafkaClientAuthenticationPlain schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationPlain schema properties To configure SASL-based PLAIN authentication, set the type property to plain . SASL PLAIN authentication mechanism requires a username and password. Warning The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled. 75.1. username Specify the username in the username property. 75.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for PLAIN client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. An example SASL based PLAIN client authentication configuration authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name 75.3. KafkaClientAuthenticationPlain schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationPlain type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationOAuth . It must have the value plain for the type KafkaClientAuthenticationPlain . Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be plain . string username Username used for the authentication. string
[ "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm", "authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaClientAuthenticationPlain-reference
19.2. Mounting a File System
19.2. Mounting a File System To attach a certain file system, use the mount command in the following form: The device can be identified by: a full path to a block device : for example, /dev/sda3 a universally unique identifier ( UUID ): for example, UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb a volume label : for example, LABEL=home Note that while a file system is mounted, the original content of the directory is not accessible. Important Linux does not prevent a user from mounting a file system to a directory with a file system already attached to it. To determine whether a particular directory serves as a mount point, run the findmnt utility with the directory as its argument and verify the exit code: If no file system is attached to the directory, the given command returns 1 . When you run the mount command without all required information, that is without the device name, the target directory, or the file system type, the mount reads the contents of the /etc/fstab file to check if the given file system is listed. The /etc/fstab file contains a list of device names and the directories in which the selected file systems are set to be mounted as well as the file system type and mount options. Therefore, when mounting a file system that is specified in /etc/fstab , you can choose one of the following options: Note that permissions are required to mount the file systems unless the command is run as root (see Section 19.2.2, "Specifying the Mount Options" ). Note To determine the UUID and-if the device uses it-the label of a particular device, use the blkid command in the following form: For example, to display information about /dev/sda3 : 19.2.1. Specifying the File System Type In most cases, mount detects the file system automatically. However, there are certain file systems, such as NFS (Network File System) or CIFS (Common Internet File System), that are not recognized, and need to be specified manually. To specify the file system type, use the mount command in the following form: Table 19.1, "Common File System Types" provides a list of common file system types that can be used with the mount command. For a complete list of all available file system types, see the section called "Manual Page Documentation" . Table 19.1. Common File System Types Type Description ext2 The ext2 file system. ext3 The ext3 file system. ext4 The ext4 file system. btrfs The btrfs file system. xfs The xfs file system. iso9660 The ISO 9660 file system. It is commonly used by optical media, typically CDs. nfs The NFS file system. It is commonly used to access files over the network. nfs4 The NFSv4 file system. It is commonly used to access files over the network. udf The UDF file system. It is commonly used by optical media, typically DVDs. vfat The FAT file system. It is commonly used on machines that are running the Windows operating system, and on certain digital media such as USB flash drives or floppy disks. See Example 19.2, "Mounting a USB Flash Drive" for an example usage. Example 19.2. Mounting a USB Flash Drive Older USB flash drives often use the FAT file system. Assuming that such drive uses the /dev/sdc1 device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the following at a shell prompt as root : 19.2.2. Specifying the Mount Options To specify additional mount options, use the command in the following form: When supplying multiple options, do not insert a space after a comma, or mount interprets incorrectly the values following spaces as additional parameters. 
Table 19.2, "Common Mount Options" provides a list of common mount options. For a complete list of all available options, consult the relevant manual page as referred to in the section called "Manual Page Documentation" . Table 19.2. Common Mount Options Option Description async Allows the asynchronous input/output operations on the file system. auto Allows the file system to be mounted automatically using the mount -a command. defaults Provides an alias for async,auto,dev,exec,nouser,rw,suid . exec Allows the execution of binary files on the particular file system. loop Mounts an image as a loop device. noauto Default behavior disallows the automatic mount of the file system using the mount -a command. noexec Disallows the execution of binary files on the particular file system. nouser Disallows an ordinary user (that is, other than root ) to mount and unmount the file system. remount Remounts the file system in case it is already mounted. ro Mounts the file system for reading only. rw Mounts the file system for both reading and writing. user Allows an ordinary user (that is, other than root ) to mount and unmount the file system. See Example 19.3, "Mounting an ISO Image" for an example usage. Example 19.3. Mounting an ISO Image An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that the ISO image of the Fedora 14 installation disc is present in the current working directory and that the /media/cdrom/ directory exists, mount the image to this directory by running the following command: Note that ISO 9660 is by design a read-only file system. 19.2.3. Sharing Mounts Occasionally, certain system administration tasks require access to the same file system from more than one place in the directory tree (for example, when preparing a chroot environment). This is possible, and Linux allows you to mount the same file system to as many directories as necessary. Additionally, the mount command implements the --bind option that provides a means for duplicating certain mounts. Its usage is as follows: Although this command allows a user to access the file system from both places, it does not apply on the file systems that are mounted within the original directory. To include these mounts as well, use the following command: Additionally, to provide as much flexibility as possible, Red Hat Enterprise Linux 7 implements the functionality known as shared subtrees . This feature allows the use of the following four mount types: Shared Mount A shared mount allows the creation of an exact replica of a given mount point. When a mount point is marked as a shared mount, any mount within the original mount point is reflected in it, and vice versa. To change the type of a mount point to a shared mount, type the following at a shell prompt: Alternatively, to change the mount type for the selected mount point and all mount points under it: See Example 19.4, "Creating a Shared Mount Point" for an example usage. Example 19.4. Creating a Shared Mount Point There are two places where other file systems are commonly mounted: the /media/ directory for removable media, and the /mnt/ directory for temporarily mounted file systems. By using a shared mount, you can make these two directories share the same content. To do so, as root , mark the /media/ directory as shared: Create its duplicate in /mnt/ by using the following command: It is now possible to verify that a mount within /media/ also appears in /mnt/ . 
For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands: Similarly, it is possible to verify that any file system mounted in the /mnt/ directory is reflected in /media/ . For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type: Slave Mount A slave mount allows the creation of a limited duplicate of a given mount point. When a mount point is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount within a slave mount is reflected in its original. To change the type of a mount point to a slave mount, type the following at a shell prompt: Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it by typing: See Example 19.5, "Creating a Slave Mount Point" for an example usage. Example 19.5. Creating a Slave Mount Point This example shows how to get the content of the /media/ directory to appear in /mnt/ as well, but without any mounts in the /mnt/ directory to be reflected in /media/ . As root , first mark the /media/ directory as shared: Then create its duplicate in /mnt/ , but mark it as "slave": Now verify that a mount within /media/ also appears in /mnt/ . For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands: Also verify that file systems mounted in the /mnt/ directory are not reflected in /media/ . For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type: Private Mount A private mount is the default type of mount, and unlike a shared or slave mount, it does not receive or forward any propagation events. To explicitly mark a mount point as a private mount, type the following at a shell prompt: Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it: See Example 19.6, "Creating a Private Mount Point" for an example usage. Example 19.6. Creating a Private Mount Point Taking into account the scenario in Example 19.4, "Creating a Shared Mount Point" , assume that a shared mount point has been previously created by using the following commands as root : To mark the /mnt/ directory as private, type: It is now possible to verify that none of the mounts within /media/ appears in /mnt/ . For example, if the CD-ROM drives contains non-empty media and the /media/cdrom/ directory exists, run the following commands: It is also possible to verify that file systems mounted in the /mnt/ directory are not reflected in /media/ . For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type: Unbindable Mount In order to prevent a given mount point from being duplicated whatsoever, an unbindable mount is used. To change the type of a mount point to an unbindable mount, type the following at a shell prompt: Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it: See Example 19.7, "Creating an Unbindable Mount Point" for an example usage. Example 19.7. Creating an Unbindable Mount Point To prevent the /media/ directory from being shared, as root : This way, any subsequent attempt to make a duplicate of this mount fails with an error: 19.2.4. 
Moving a Mount Point To change the directory in which a file system is mounted, use the following command: See Example 19.8, "Moving an Existing NFS Mount Point" for an example usage. Example 19.8. Moving an Existing NFS Mount Point An NFS storage contains user directories and is already mounted in /mnt/userdirs/ . As root , move this mount point to /home by using the following command: To verify the mount point has been moved, list the content of both directories: 19.2.5. Setting Read-only Permissions for root Sometimes, you need to mount the root file system with read-only permissions. Example use cases include enhancing security or ensuring data integrity after an unexpected system power-off. 19.2.5.1. Configuring root to Mount with Read-only Permissions on Boot In the /etc/sysconfig/readonly-root file, change READONLY to yes : Change defaults to ro in the root entry ( / ) in the /etc/fstab file: Add ro to the GRUB_CMDLINE_LINUX directive in the /etc/default/grub file and ensure that it does not contain rw : Recreate the GRUB2 configuration file: If you need to add files and directories to be mounted with write permissions in the tmpfs file system, create a text file in the /etc/rwtab.d/ directory and put the configuration there. For example, to mount /etc/example/file with write permissions, add this line to the /etc/rwtab.d/ example file: Important Changes made to files and directories in tmpfs do not persist across boots. See Section 19.2.5.3, "Files and Directories That Retain Write Permissions" for more information on this step. Reboot the system. 19.2.5.2. Remounting root Instantly If root ( / ) was mounted with read-only permissions on system boot, you can remount it with write permissions: This can be particularly useful when / is incorrectly mounted with read-only permissions. To remount / with read-only permissions again, run: Note This command mounts the whole / with read-only permissions. A better approach is to retain write permissions for certain files and directories by copying them into RAM, as described in Section 19.2.5.1, "Configuring root to Mount with Read-only Permissions on Boot" . 19.2.5.3. Files and Directories That Retain Write Permissions For the system to function properly, some files and directories need to retain write permissions. With root in read-only mode, they are mounted in RAM in the tmpfs temporary file system. The default set of such files and directories is read from the /etc/rwtab file, which contains: Entries in the /etc/rwtab file follow this format: A file or directory can be copied to tmpfs in the following three ways: empty path : An empty path is copied to tmpfs . Example: empty /tmp dirs path : A directory tree is copied to tmpfs , empty. Example: dirs /var/run files path : A file or a directory tree is copied to tmpfs intact. Example: files /etc/resolv.conf The same format applies when adding custom paths to /etc/rwtab.d/ .
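For reference, an /etc/fstab entry that enables the short forms of the mount command described above might look like the following sketch; the UUID shown is the example value used earlier in this section, and the mount point and file system type are assumptions.
# device                                      mount point  type  options   dump fsck
UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb     /home        ext3  defaults  1    2
# With this entry in place, either "mount /home" or "mount UUID=34795a28-..." is sufficient.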
[ "mount [ option ... ] device directory", "findmnt directory ; echo USD?", "mount [ option ... ] directory mount [ option ... ] device", "blkid device", "blkid /dev/sda3 /dev/sda3: LABEL=\"home\" UUID=\"34795a28-ca6d-4fd8-a347-73671d0c19cb\" TYPE=\"ext3\"", "mount -t type device directory", "~]# mount -t vfat /dev/sdc1 /media/flashdisk", "mount -o options device directory", "mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom", "mount --bind old_directory new_directory", "mount --rbind old_directory new_directory", "mount --make-shared mount_point", "mount --make-rshared mount_point", "mount --bind /media /media # mount --make-shared /media", "mount --bind /media /mnt", "mount /dev/cdrom /media/cdrom # ls /media/cdrom EFI GPL isolinux LiveOS # ls /mnt/cdrom EFI GPL isolinux LiveOS", "# mount /dev/sdc1 /mnt/flashdisk # ls /media/flashdisk en-US publican.cfg # ls /mnt/flashdisk en-US publican.cfg", "mount --make-slave mount_point", "mount --make-rslave mount_point", "~]# mount --bind /media /media ~]# mount --make-shared /media", "~]# mount --bind /media /mnt ~]# mount --make-slave /mnt", "~]# mount /dev/cdrom /media/cdrom ~]# ls /media/cdrom EFI GPL isolinux LiveOS ~]# ls /mnt/cdrom EFI GPL isolinux LiveOS", "~]# mount /dev/sdc1 /mnt/flashdisk ~]# ls /media/flashdisk ~]# ls /mnt/flashdisk en-US publican.cfg", "mount --make-private mount_point", "mount --make-rprivate mount_point", "~]# mount --bind /media /media ~]# mount --make-shared /media ~]# mount --bind /media /mnt", "~]# mount --make-private /mnt", "~]# mount /dev/cdrom /media/cdrom ~]# ls /media/cdrom EFI GPL isolinux LiveOS ~]# ls /mnt/cdrom ~]#", "~]# mount /dev/sdc1 /mnt/flashdisk ~]# ls /media/flashdisk ~]# ls /mnt/flashdisk en-US publican.cfg", "mount --make-unbindable mount_point", "mount --make-runbindable mount_point", "mount --bind /media /media # mount --make-unbindable /media", "mount --bind /media /mnt mount: wrong fs type, bad option, bad superblock on /media, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so", "mount --move old_directory new_directory", "mount --move /mnt/userdirs /home", "ls /mnt/userdirs # ls /home jill joe", "Set to 'yes' to mount the file systems as read-only. READONLY=yes [output truncated]", "/dev/mapper/luks-c376919e... / ext4 ro ,x-systemd.device-timeout=0 1 1", "GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ro \"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "files /etc/example/file", "mount -o remount,rw /", "mount -o remount,ro /", "dirs /var/cache/man dirs /var/gdm [output truncated] empty /tmp empty /var/cache/foomatic [output truncated] files /etc/adjtime files /etc/ntp.conf [output truncated]", "how the file or directory is copied to tmpfs path to the file or directory" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/sect-using_the_mount_command-mounting
Chapter 3. Supported platforms
Chapter 3. Supported platforms You can find the supported platforms and life cycle dates for both current and past versions of Red Hat Developer Hub on the Life Cycle page .
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/about_red_hat_developer_hub/supported-platforms_about-rhdh
Chapter 3. Integrating OpenStack Key Manager (barbican) with Hardware Security Module (HSM) appliances
Chapter 3. Integrating OpenStack Key Manager (barbican) with Hardware Security Module (HSM) appliances Integrate your Red Hat OpenStack Platform deployment with hardware security module (HSM) appliances to increase your security posture by using hardware based cryptographic processing. When you plan your OpenStack Key Manager integration with an HSM appliance, you must choose a supported encryption type and HSM appliance, set up regular backups, and review any other information or limitations that might affect your deployment. 3.1. Integrating OpenStack Key Manager (barbican) with an Atos HSM To integrate the PKCS#11 back end with your Trustway Proteccio Net HSM appliance, create a configuration file with the parameters to connect barbican with the HSM. You can enable HA by listing two or more HSMs below the atos_hsms parameter. Planning By default, the HSM can have a maximum of 32 concurrent connections. If you exceed this number, you might experience a memory error from the PKCS#11 client. You can calculate the number of connections as follows: Each Controller has one barbican-api and one barbican-worker process. Each Barbican API process is executed with N Apache workers - (where N defaults to the number of CPUs). Each worker has one connection to the HSM. Each barbican-worker process has one connection to the database. You can use the BarbicanWorkers heat parameter to define the number of Apache workers for each API process. By default, the number of Apache workers matches the CPU count. For example, if you have three Controllers, each with 32 cores, then the Barbican API on each Controller uses 32 Apache workers. Consequently, one Controller consumes all 32 HSM connections available. To avoid this contention, limit the number of Barbican Apache workers configured for each node. In this example, set BarbicanWorkers to 10 so that all three Controllers can make ten concurrent connections each to the HSM. Prerequisites A password-protected HTTPS server that provides vendor software for the Atos HSM Table 3.1. Files provided by the HTTPS server File Example Provided by Proteccio Client Software ISO image file Proteccio1.09.05.iso HSM Vendor SSL server certificate proteccio.CRT HSM administrator SSL client certificate client.CRT HSM administrator SSL Client key client.KEY HSM administrator Procedure Create a configure-barbican.yaml environment file for Barbican and add the following parameters: Note The atos_hsms parameter supersedes the parameters atos_hsm_ip_address and atos_server_cert_location which have been deprecated and will be removed in a future release. Table 3.2. Heat parameters Parameter Value BarbicanSimpleCryptoGlobalDefault This is a Boolean that determines if simplecrypto is the global default. BarbicanPkcs11GlobalDefault This is a Boolean that determines if PKCS#11 is the global default. BarbicanPkcs11CryptoSlotId Slot ID for the Virtual HSM to be used by Barbican. ATOSVars atos_client_iso_name The filename for the Atos client software ISO. This value must match the filename in the URL for the atos_client_iso_location parameter. atos_client_iso_location The URL, including the username and password, that specifies the HTTPS server location of the Proteccio Client Software ISO image. atos_client_cert_location The URL, including the username and password, that specifies the HTTPS server location of the SSL client certificate. atos_client_key_location The URL, including the username and password, that specifies the HTTPS server location of the SSL client key. 
This must be the matching key for the client certificate above. atos_hsms A list of one or more HSMs that specifies the name, certificate location and IP address of the HSM. When you include more than one HSM in this list, Barbican configures the HSMs for load balancing and high availability. Include the custom configure-barbican.yaml , barbican.yaml and ATOS specific barbican-backend-pkcs11-atos.yaml environment files in the deployment command, as well as any other environment files relevant to your deployment: Verification Create a test secret: Retrieve the payload for the secret that you just created: 3.2. Integrating OpenStack Key Manager (barbican) with a Thales Luna Network HSM To integrate the PKCS#11 back end with your Thales Luna Network HSM appliance for hardware based cryptographic processing, use an Ansible role to download and install the Thales Luna client software on the Controller, and create a Key Manager configuration file to include the predefined HSM IP and credentials. Prerequisites A password-protected HTTPS server that provides vendor software for the Thales Luna Network HSM. The vendor provided Luna Network HSM client software in a compressed zip archive. Procedure Install the ansible-role-lunasa-hsm role on the director: Create a configure-barbican.yaml environment file for Key Manager (barbican) and add parameters specific to your environment. Table 3.3. Heat parameters Parameter Value BarbicanSimpleCryptoGlobalDefault This is a Boolean that determines if simplecrypto is the global default. BarbicanPkcs11GlobalDefault This is a Boolean that determines if PKCS#11 is the global default. BarbicanPkcs11CryptoTokenLabel If you have one HSM, then the value of the parameter is the partition Label. If you are using HA between two or more partitions, then this is the label that you want to give to the HA group. BarbicanPkcs11CryptoLogin The PKCS#11 password used to log into the HSM, provided by the HSM administrator. LunasaVar lunasa_client_tarball_name The name of the Luna software tarball. lunasa_client_tarball_location The URL that specifies the HTTPS server location of the Luna Software tarball. lunasa_client_installer_path Path to the install.sh script in the zipped tarball. lunasa_client_rotate_cert (Optional) When set to true, new client certificates will be generated to replace any existing certificates. Default: false lunasa_client_working_dir (Optional) Working directory in the Controller nodes. Default: /tmp/lunasa_client_install lunasa_hsms A list of one or more HSMs that specifies the name, hostname, admin_password, partition, and partition serial number. When you include more than one HSM in this list, Barbican configures the HSMs for high availability. Include the custom configure-barbican.yaml and Thales specific barbican-backend-pkcs11-llunasa.yaml environment files in the deployment command, as well as any other templates relevant for your deployment: 3.3. Integrating OpenStack Key Manager (barbican) with an Entrust nShield Connect XC HSM To integrate the PKCS#11 back end with your Entrust nShield Connect XC HSM, use an Ansible role to download and install the Entrust client software on the Controller, and create a Barbican configuration file to include the predefined HSM IP and credentials. Prerequisites A password-protected HTTPS server that provides vendor software for the Entrust nShield Connect XC. Procedure Create a configure-barbican.yaml environment file for Barbican and add parameters specific to your environment. 
Use the following snippet as an example: Table 3.4. Heat parameters Parameter Value BarbicanSimpleCryptoGlobalDefault This is a Boolean that determines if simplecrypto is the global default. BarbicanPkcs11GlobalDefault This is a Boolean that determines if PKCS#11 is the global default. BarbicanPkcs11CryptoSlotId Slot ID for the Virtual HSM to be used by Barbican. BarbicanPkcs11CryptoMKEKLabel This parameter defines the name of the mKEK generated in the HSM. Director creates this key in the HSM using this name. BarbicanPkcs11CryptoHMACLabel This parameter defines the name of the HMAC key generated in the HSM. Director creates this key in the HSM using this name. ThalesVars thales_client_working_dir A user-defined temporary working directory. thales_client_tarball_location The URL that specifies the HTTPS server location of the Entrust software. thales_km_data_tarball_name The name of the Entrust software tarball. thales_rfs_key A private key used to obtain an SSH connection to the RFS server. You must add this as an authorized key to the RFS server. Include the custom configure-barbican.yaml environment file, along with the barbican.yaml and Thales specific barbican-backend-pkcs11-thales.yaml environment files, and any other templates needed for you deployment when running the openstack overcloud deploy command: Verification Create a test secret: Retrieve the payload for the secret that you just created: 3.3.1. Load Balancing with Entrust nShield Connect You can now enable load sharing on Entrust nShield Connect HSMs by specifying an array of valid HSMs. When more than one HSMs are listed, load sharing is enabled. This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Procedure When configuring the name and ip parameters for your Entrust nShield Connect HSMs, specifying more than one will enable load sharing: 3.4. Rotating MKEK and HMAC keys You can rotate the MKEK and HMAC keys using a director update. Note Due to a limitation in Barbican, the MKEK and HMAC have the same key type. Procedure Add the following parameter to your deployment environment files: Change the labels on the MKEK and HMAC keys For example, if your labels are similar to these: You can change the labels by incrementing the values: Note Do not change the HMAC key type. Re-deploy using director to apply the update. Director checks whether the keys that are labelled for the MKEK and HMAC exist, and then creates them. In addition, with the BarbicanPkcs11CryptoRewrapKeys parameter set to True , director calls barbican-manage hsm pkek_rewrap to rewrap all existing pKEKs.
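The BarbicanWorkers limit described in the planning section can be expressed as a small environment file. This sketch assumes the three-Controller, 32-connection example given above; include it with -e in your openstack overcloud deploy command alongside the other Barbican environment files.
parameter_defaults:
  # 3 Controllers x 10 Apache workers = 30 concurrent HSM connections,
  # which stays under the HSM's default limit of 32.
  BarbicanWorkers: 10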
[ "parameter_defaults BarbicanSimpleCryptoGlobalDefault: false BarbicanPkcs11CryptoGlobalDefault: true BarbicanPkcs11CryptoLogin: ******** BarbicanPkcs11CryptoSlotId: 1 ATOSVars: atos_client_iso_name: Proteccio1.09.05.iso atos_client_iso_location: https://user@PASSWORD:example.com/Proteccio1.09.05.iso atos_client_cert_location: https://user@PASSWORD:example.com/client.CRT atos_client_key_location: https://user@PASSWORD:example.com/client.KEY atos_hsms: - name: myHsm1 server_cert_location: https://user@PASSWORD:example.com/myHsm1.CRT ip: 192.168.1.101 - name: myHsm2 server_cert_location: https://user@PASSWORD:example.com/myHsm2.CRT ip: ip: 192.168.1.102", "openstack overcloud deploy --timeout 100 --templates /usr/share/openstack-tripleo-heat-templates --stack overcloud --libvirt-type kvm --ntp-server clock.redhat.com -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/templates/config_lvm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/templates/network/network-environment.yaml -e /home/stack/templates/hostnames.yml -e /home/stack/templates/nodes_data.yaml -e /home/stack/templates/extra_templates.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-pkcs11-atos.yaml -e /home/stack/templates/configure-barbican.yaml --log-file overcloud_deployment_with_atos.log", "openstack secret store --name testSecret --payload 'TestPayload' +---------------+------------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------------+ | Secret href | https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 | | Name | testSecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+------------------------------------------------------------------------------------+", "openstack secret get https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 --payload +---------+-------------+ | Field | Value | +---------+-------------+ | Payload | TestPayload | +---------+-------------+", "sudo dnf install ansible-role-lunasa-hsm", "parameter_defaults: BarbicanPkcs11CryptoMKEKLabel: \"barbican_mkek_0\" BarbicanPkcs11CryptoHMACLabel: \"barbican_hmac_0\" BarbicanPkcs11CryptoLogin: \"USDPKCS_11_USER_PIN\" BarbicanPkcs11CryptoGlobalDefault: true LunasaVars: lunasa_client_tarball_name: 610-012382-014_SW_Client_HSM_6.2_RevA.tar.zip lunasa_client_tarball_location: https://user:[email protected]/luna_software/610-012382-014_SW_Client_HSM_6.2_RevA.tar.zip lunasa_client_installer_path: 610-012382-014_SW_Client_HSM_6.2_RevA/linux/64/install.sh lunasa_hsms: - hostname: luna-hsm.example.com admin_password: \"USDHSM_ADMIN_PASSWORD\" partition: myPartition1 partition_serial: 123456789", "openstack overcloud deploy --templates . 
-e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-pkcs11-lunasa.yaml -e /home/stack/templates/configure-barbican.yaml --log-file overcloud_deployment_with_luna.log", "parameter_defaults: VerifyGlanceSignatures: true SwiftEncryptionEnabled: true BarbicanPkcs11CryptoLogin: 'sample string' BarbicanPkcs11CryptoSlotId: '492971158' BarbicanPkcs11CryptoGlobalDefault: true BarbicanPkcs11CryptoLibraryPath: '/opt/nfast/toolkits/pkcs11/libcknfast.so' BarbicanPkcs11CryptoEncryptionMechanism: 'CKM_AES_CBC' BarbicanPkcs11CryptoHMACKeyType: 'CKK_SHA256_HMAC' BarbicanPkcs11CryptoHMACKeygenMechanism: 'CKM_NC_SHA256_HMAC_KEY_GEN' BarbicanPkcs11CryptoMKEKLabel: 'barbican_mkek_10' BarbicanPkcs11CryptoMKEKLength: '32' BarbicanPkcs11CryptoHMACLabel: 'barbican_hmac_10' BarbicanPkcs11CryptoThalesEnabled: true BarbicanPkcs11CryptoEnabled: true ThalesVars: thales_client_working_dir: /tmp/thales_client_install thales_client_tarball_location: https://your server/CipherTools-linux64-dev-12.40.2.tgz thales_client_tarball_name: CipherTools-linux64-dev-12.40.2.tgz thales_client_path: linux/libc6_11/amd64/nfast thales_client_uid: 42481 thales_client_gid: 42481 thales_km_data_location: https://your server/kmdata_post_card_creation.tar.gz thales_km_data_tarball_name: kmdata_post_card_creation.tar.gz thales_rfs_server_ip_address: 192.168.10.12 thales_hsm_config_location: hsm-C90E-02E0-D947 nShield_hsms: - name: hsm-name.example.com ip: 192.168.10.10 thales_rfs_user: root thales_rfs_key: | -----BEGIN RSA PRIVATE KEY----- Sample private key -----END RSA PRIVATE KEY----- resource_registry: OS::TripleO::Services::BarbicanBackendPkcs11Crypto: /home/stack/tripleo-heat-templates/puppet/services/barbican-backend-pkcs11-crypto.yaml", "openstack overcloud deploy --timeout 100 --templates /usr/share/openstack-tripleo-heat-templates --stack overcloud --libvirt-type kvm --ntp-server clock.redhat.com -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/templates/config_lvm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/templates/network/network-environment.yaml -e /home/stack/templates/hostnames.yml -e /home/stack/templates/nodes_data.yaml -e /home/stack/templates/extra_templates.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-pkcs11-thales.yaml -e /home/stack/templates/configure-barbican.yaml --log-file overcloud_deployment_with_atos.log", "openstack secret store --name testSecret --payload 'TestPayload' +---------------+------------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------------+ | Secret href | https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 | | Name | testSecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+------------------------------------------------------------------------------------+", "openstack secret get https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 --payload +---------+-------------+ | Field | Value | +---------+-------------+ | Payload | TestPayload | +---------+-------------+", 
"parameter_defaults: . ThalesVars: . nshield_hsms: - name: hsm-name1.example.com ip: 192.168.10.10 - name: hsm-nam2.example.com ip: 192.168.10.11 .", "BarbicanPkcs11CryptoRewrapKeys: true", "BarbicanPkcs11CryptoMKEKLabel: 'barbican_mkek_10' BarbicanPkcs11CryptoHMACLabel: 'barbican_hmac_10'", "BarbicanPkcs11CryptoMKEKLabel: 'barbican_mkek_11' BarbicanPkcs11CryptoHMACLabel: 'barbican_hmac_11'" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_secrets_with_the_key_manager_service/assembly-integrating-key-manager-hsm_rhosp
3.6. Displaying LVM Information with the lvm Command
3.6. Displaying LVM Information with the lvm Command The lvm command provides several built-in options that you can use to display information about LVM support and configuration. lvm devtypes Displays the recognized built-in block device types (Red Hat Enterprise Linux release 6.6 and later). lvm formats Displays recognized metadata formats. lvm help Displays LVM help text. lvm segtypes Displays recognized logical volume segment types. lvm tags Displays any tags defined on this host. For information on LVM object tags, see Appendix D, LVM Object Tags . lvm version Displays the current version information.
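For example, to confirm the installed LVM version and list any tags defined on the host, you can run the following commands as root (a brief illustration; the output depends on the system and is omitted here):
# lvm version
# lvm tags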
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvmdisplaycommand
Chapter 5. Mirroring images for a disconnected installation by using the oc-mirror plugin v2
Chapter 5. Mirroring images for a disconnected installation by using the oc-mirror plugin v2 You can run your cluster in a restricted network without direct internet connectivity if you install the cluster from a mirrored set of OpenShift Container Platform container images in a private registry. This registry must be running whenever your cluster is running. Just as you can use the oc-mirror OpenShift CLI ( oc ) plugin, you can also use oc-mirror plugin v2 to mirror images to a mirror registry in your fully or partially disconnected environments. To download the required images from the official Red Hat registries, you must run oc-mirror plugin v2 from a system with internet connectivity. Important oc-mirror plugin v2 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1. Prerequisites You must have a container image registry that supports Docker V2-2 in the location that hosts the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, use version 3.6 or later with the oc-mirror plugin. See the documentation on Deploying the Red Hat Quay Operator on OpenShift Container Platform (Red Hat Quay documentation) . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not have an existing solution for a container image registry, OpenShift Container Platform subscribers receive a mirror registry for Red Hat OpenShift. This mirror registry is included with your subscription and serves as a small-scale container registry. You can use this registry to mirror the necessary container images of OpenShift Container Platform for disconnected installations. Every machine in the provisioned clusters must have access to the mirror registry. If the registry is unreachable, tasks like installation, updating, or routine operations such as workload relocation, might fail. Mirror registries must be operated in a highly available manner, ensuring their availability aligns with the production availability of your OpenShift Container Platform clusters. High level workflow The following steps outline the high-level workflow on how to mirror images to a mirror registry by using the oc-mirror plugin v2: Create an image set configuration file. Mirror the image set to the target mirror registry by using one of the following workflows: Mirror an image set directly to the target mirror registry (mirror to mirror). Mirror an image set to disk (Mirror-to-Disk), transfer the tar file to the target environment, then mirror the image set to the target mirror registry (Disk-to-Mirror). Configure your cluster to use the resources generated by the oc-mirror plugin v2. Repeat these steps to update your target mirror registry as necessary. 5.2. About oc-mirror plugin v2 The oc-mirror OpenShift CLI ( oc ) plugin is a single tool that mirrors all required OpenShift Container Platform content and other images to your mirror registry. To use the new Technology Preview version of oc-mirror, add the --v2 flag to the oc-mirror plugin v2 command line. 
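For example, a mirror-to-mirror run with the v2 flow can look like the following sketch, where the image set configuration file name, workspace path, and registry URL are placeholders rather than values from this procedure:
USD oc mirror -c imageset-config.yaml --workspace file://<workspace_path> docker://<mirror_registry_url> --v2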
oc-mirror plugin v2 has the following features: Verifies that the complete image set specified in the image set config is mirrored to the mirrored registry, regardless of whether the images were previously mirrored or not. Uses a cache system instead of metadata. Maintains minimal archive sizes by incorporating only new images into the archive. Generates mirroring archives with content selected by mirroring date. Can generate ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), instead of ImageContentSourcePolicy (ICSP) for the full image set, rather than just for the incremental changes. Saves filter Operator versions by bundle name. Does not perform automatic pruning. V2 now has a Delete feature, which grants users more control over deleting images. Introduces support for registries.conf . This change facilitates mirroring to multiple enclaves while using the same cache. 5.2.1. oc-mirror plugin v2 compatibility and support The oc-mirror plugin v2 is supported for OpenShift Container Platform. Note On aarch64 , ppc64le , and s390x architectures the oc-mirror plugin v2 is supported only for OpenShift Container Platform versions 4.14 and later. Use the latest available version of the oc-mirror plugin v2 regardless of which versions of OpenShift Container Platform you need to mirror. 5.3. Preparing your mirror hosts To use the oc-mirror plugin v2 for image mirroring, you need to install the plugin and create a file with credentials for container images, enabling you to mirror from Red Hat to your mirror. 5.3.1. Installing the oc-mirror OpenShift CLI plugin Install the oc-mirror OpenShift CLI plugin to manage image sets in disconnected environments. Prerequisites You have installed the OpenShift CLI ( oc ). If you are mirroring image sets in a fully disconnected environment, ensure the following: You have installed the oc-mirror plugin on the host that has internet access. The host in the disconnected environment has access to the target mirror registry. You have set the umask parameter to 0022 on the operating system that uses oc-mirror. You have installed the correct binary for the RHEL version that you are using. Procedure Download the oc-mirror CLI plugin. Navigate to the Downloads page of the OpenShift Cluster Manager . Under the OpenShift disconnected installation tools section, click Download for OpenShift Client (oc) mirror plugin and save the file. Extract the archive: USD tar xvzf oc-mirror.tar.gz If necessary, update the plugin file to be executable: USD chmod +x oc-mirror Note Do not rename the oc-mirror file. Install the oc-mirror CLI plugin by placing the file in your PATH , for example, /usr/local/bin : USD sudo mv oc-mirror /usr/local/bin/. Verification Verify that the plugin for oc-mirror v2 is successfully installed by running the following command: USD oc mirror --v2 --help 5.3.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. 
You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Save the file as USDXDG_RUNTIME_DIR/containers/auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 5.4. Mirroring an image set to a mirror registry Mirroring an image set to a mirror registry ensures that the required images are available in a secure and controlled environment, facilitating smoother deployments, updates, and maintenance tasks. 5.4.1. Building the image set configuration The oc-mirror plugin v2 uses the image set configuration as the input file to determine the required images for mirroring. Example for the ImageSetConfiguration input file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: channels: - name: stable-4.13 minVersion: 4.13.10 maxVersion: 4.13.10 graph: true operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: aws-load-balancer-operator - name: 3scale-operator - name: node-observability-operator additionalImages: - name: registry.redhat.io/ubi8/ubi:latest - name: registry.redhat.io/ubi9/ubi@sha256:20f695d2a91352d4eaa25107535126727b5945bff38ed36a3e59590f495046f0 5.4.2. Mirroring an image set in a partially disconnected environment You can mirror image sets to a registry using the oc-mirror plugin v2 in environments with restricted internet access. Prerequisites You have access to the internet and the mirror registry in the environment where you are running the oc-mirror plugin v2. 
Procedure Mirror the images from the specified image set configuration to a specified registry by running the following command: USD oc mirror -c isc.yaml --workspace file://<file_path> docker://<mirror_registry_url> --v2 1 1 Specify the URL or address of the mirror registry to which the images are mirrored. Verification Navigate to the cluster-resources directory within the working-dir directory that was generated in the <file_path> directory. Verify that the YAML files are present for the ImageDigestMirrorSet , ImageTagMirrorSet and CatalogSource resources. Next steps Configure your cluster to use the resources generated by oc-mirror plugin v2. 5.4.3. Mirroring an image set in a fully disconnected environment You can mirror image sets in a fully disconnected environment where the OpenShift Container Platform cluster cannot access the public internet. Mirror to disk : Prepare an archive containing the image set for mirroring. Internet access is required. Manual step : Transfer the archive to the network of the disconnected mirror registry. Disk to mirror : To mirror the image set from the archive to the target disconnected registry, run oc-mirror plugin v2 from the environment that has access to the mirror registry. 5.4.3.1. Mirroring from mirror to disk You can use the oc-mirror plugin v2 to generate an image set and save the content to disk. You can then transfer the generated image set to the disconnected environment and mirror it to the target registry. oc-mirror plugin v2 retrieves the container images from the source specified in the image set configuration and packs them into a tar archive in a local directory. Procedure Mirror the images from the specified image set configuration to the disk by running the following command: USD oc mirror -c isc.yaml file://<file_path> --v2 1 1 Add the required file path. Verification Navigate to the <file_path> directory that was generated. Verify that the archive files have been generated. Next steps Configure your cluster to use the resources generated by oc-mirror plugin v2. 5.4.3.2. Mirroring from disk to mirror You can use the oc-mirror plugin v2 to mirror image sets from a disk to a target mirror registry. The oc-mirror plugin v2 retrieves container images from a local disk and transfers them to the specified mirror registry. Procedure Process the image set file on the disk and mirror the contents to a target mirror registry by running the following command: USD oc mirror -c isc.yaml --from file://<file_path> docker://<mirror_registry_url> --v2 1 1 Specify the URL or address of the mirror registry to which the images are mirrored. Verification Navigate to the cluster-resources directory within the working-dir directory that was generated in the <file_path> directory. Verify that the YAML files are present for the ImageDigestMirrorSet , ImageTagMirrorSet and CatalogSource resources. Next steps Configure your cluster to use the resources generated by oc-mirror plugin v2. 5.5. Additional resources Updating a cluster in a disconnected environment using the OpenShift Update Service . 5.6. About custom resources generated by v2 With oc-mirror plugin v2, ImageDigestMirrorSet (IDMS) and ImageTagMirrorSet (ITMS) are generated by default if at least one image is found to which a tag refers. These sets contain mirrors for images referenced by digest or tag in releases, Operator catalogs, and additional images.
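For reference, a generated ImageDigestMirrorSet typically resembles the following minimal sketch; the resource name, mirror repository, and source entries are illustrative and depend on the contents of your image set:
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: idms-release-0
spec:
  imageDigestMirrors:
  - mirrors:
    - <mirror_registry_url>/openshift/release-images
    source: quay.io/openshift-release-dev/ocp-release
A generated ImageTagMirrorSet has the same shape, with imageTagMirrors entries instead of imageDigestMirrors .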
The ImageDigestMirrorSet (IDMS) links the mirror registry to the source registry and forwards image pull requests using digest specifications. The ImagetagMirrorSet (ITMS) resource, however, redirects image pull requests by using image tags. Operator Lifecycle Manager (OLM) uses the CatalogSource resource to retrieve information about the available Operators in the mirror registry. The OSUS service uses the UpdateService resource to provide Cincinnati graph to the disconnected environment. 5.6.1. Configuring your cluster to use the resources generated by oc-mirror plugin v2 After you have mirrored your image set to the mirror registry, you must apply the generated ImageDigestMirrorSet (IDMS), ImageTagMirrorSet (ITMS), CatalogSource , and UpdateService to the cluster. Important In oc-mirror plugin v2, the IDMS and ITMS files cover the entire image set, unlike the ICSP files in oc-mirror plugin v1. Therefore, the IDMS and ITMS files contain all images of the set even if you only add new images during incremental mirroring. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Apply the YAML files from the results directory to the cluster by running the following command: USD oc apply -f <path_to_oc-mirror_workspace>/working-dir/cluster-resources Verification Verify that the ImageDigestMirrorSet resources are successfully installed by running the following command: USD oc get imagedigestmirrorset Verify that the ImageTagMirrorSet resources are successfully installed by running the following command: USD oc get imagetagmirrorset Verify that the CatalogSource resources are successfully installed by running the following command: USD oc get catalogsource -n openshift-marketplace 5.7. Deletion of images from your disconnected environment Before you can use oc-mirror plugin v2, you must delete previously deployed images. oc-mirror plugin v2 no longer performs automatic pruning. You must create the DeleteImageSetConfiguration file to delete image configuration when using oc-mirror plugin v2. This prevents accidentally deleting necessary or deployed images when making changes with ImageSetConfig.yaml . In the following example, DeleteImageSetConfiguration removes the following: All images of OpenShift Container Platform release 4.13.3. The catalog image redhat-operator-index v4.12 . The aws-load-balancer-operator v0.0.1 bundle and all its related images. The additional images ubi and ubi-minimal referenced by their corresponding digests. Example: DeleteImageSetConfig apiVersion: mirror.openshift.io/v2alpha1 kind: DeleteImageSetConfiguration delete: platform: channels: - name: stable-4.13 minVersion: 4.13.3 maxVersion: 4.13.3 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: aws-load-balancer-operator minVersion: 0.0.1 maxVersion: 0.0.1 additionalImages: - name: registry.redhat.io/ubi8/ubi@sha256:bce7e9f69fb7d4533447232478fd825811c760288f87a35699f9c8f030f2c1a6 - name: registry.redhat.io/ubi8/ubi-minimal@sha256:8bedbe742f140108897fb3532068e8316900d9814f399d676ac78b46e740e34e Important Consider using the mirror-to-disk and disk-to-mirror workflows to reduce mirroring issues. In the image delete workflow, oc-mirror plugin v2 deletes only the manifests of the images, which does not reduce the storage occupied in the registry. To free up storage space from unnecessary images, such as those with deleted manifests, you must enable the garbage collector on your container registry. 
With the garbage collector enabled, the registry will delete the image blobs that no longer have references to any manifests, thereby reducing the storage previously occupied by the deleted blobs. Enabling the garbage collector differs depending on your container registry. Important To skip deleting the Operator catalog image when deleting images, you must list the specific Operators under the Operator catalog image in the DeleteImageSetConfiguration file. This ensures that only the specified Operators are deleted, not the catalog image. If only the Operator catalog image is specified, all Operators within that catalog, as well as the catalog image itself, will be deleted. 5.7.1. Deleting the images from disconnected environment To delete images from a disconnected environment using the oc-mirror plugin v2, follow the procedure. Procedure Create a YAML file that deletes images: USD oc mirror delete --config delete-image-set-config.yaml --workspace file://<previously_mirrored_work_folder> --v2 --generate docker://<remote_registry> Where: <previously_mirrored_work_folder> : Use the directory where images were previously mirrored or stored during the mirroring process. <remote_registry> : Insert the URL or address of the remote container registry from which images will be deleted. Go to the <previously_mirrored_work_folder>/delete directory that was created. Verify that the delete-images.yaml file has been generated. Manually ensure that each image listed in the file is no longer needed by the cluster and can be safely removed from the registry. After you generate the delete YAML file, delete the images from the remote registry: USD oc mirror delete --v2 --delete-yaml-file <previously_mirrored_work_folder>/delete/delete-images.yaml docker:/ <remote_registry> Where: <previously_mirrored_work_folder> : Specify your previously mirrored work folder. Important When using the mirror-to-mirror procedure, images are not cached locally, so you cannot delete images from a local cache. 5.8. Verifying your selected images for mirroring You can use oc-mirror plugin v2 to perform a test run (dry run) that does not actually mirror any images. This enables you to review the list of images that would be mirrored. You can also use a dry run to catch any errors with your image set configuration early. When running a dry run on a mirror-to-disk workflow, the oc-mirror plugin v2 checks if all the images within the image set are available in its cache. Any missing images are listed in the missing.txt file. When a dry run is performed before mirroring, both missing.txt and mapping.txt files contain the same list of images. 5.8.1. Performing dry run for oc-mirror plugin v2 Verify your image set configuration by performing a dry run without mirroring any images. This ensures your setup is correct and prevents unintended changes. Procedure To perform a test run, run the oc mirror command and append the --dry-run argument to the command: USD oc mirror -c <image_set_config_yaml> --from file://<oc_mirror_workspace_path> docker://<mirror_registry_url> --dry-run --v2 Where: <image_set_config_yaml> : Use the image set configuration file that you just created. <oc_mirror_workspace_path> : Insert the address of the workspace path. <mirror_registry_url> : Insert the URL or address of the remote container registry from which images will be deleted. 
Example output USD oc mirror --config /tmp/isc_dryrun.yaml file://<oc_mirror_workspace_path> --dry-run --v2 [INFO] : :warning: --v2 flag identified, flow redirected to the oc-mirror v2 version. This is Tech Preview, it is still under development and it is not production ready. [INFO] : :wave: Hello, welcome to oc-mirror [INFO] : :gear: setting up the environment for you... [INFO] : :twisted_rightwards_arrows: workflow mode: mirrorToDisk [INFO] : :sleuth_or_spy: going to discover the necessary images... [INFO] : :mag: collecting release images... [INFO] : :mag: collecting operator images... [INFO] : :mag: collecting additional images... [WARN] : :warning: 54/54 images necessary for mirroring are not available in the cache. [WARN] : List of missing images in : CLID-19/working-dir/dry-run/missing.txt. please re-run the mirror to disk process [INFO] : :page_facing_up: list of all images for mirroring in : CLID-19/working-dir/dry-run/mapping.txt [INFO] : mirror time : 9.641091076s [INFO] : :wave: Goodbye, thank you for using oc-mirror Verification Navigate to the workspace directory that was generated: USD cd <oc_mirror_workspace_path> Review the mapping.txt and missing.txt files that were generated. These files contain a list of all images that would be mirrored. 5.8.2. Troubleshooting oc-mirror plugin v2 errors oc-mirror plugin v2 now logs all image mirroring errors in a separate file, making it easier to track and diagnose failures. Important When errors occur while mirroring release or release component images, they are critical. This stops the mirroring process immediately. Errors with mirroring Operators, Operator-related images, or additional images do not stop the mirroring process. Mirroring continues, and oc-mirror plugin v2 logs updates every 8 images. When an image fails to mirror, and that image is mirrored as part of one or more Operator bundles, oc-mirror plugin v2 notifies the user which Operators are incomplete, providing clarity on the Operator bundles affected by the error. Procedure Check for server-related issues: Example error [ERROR] : [Worker] error mirroring image localhost:55000/openshift/graph-image:latest error: copying image 1/4 from manifest list: trying to reuse blob sha256:edab65b863aead24e3ed77cea194b6562143049a9307cd48f86b542db9eecb6e at destination: pinging container registry localhost:5000: Get "https://localhost:5000/v2/": http: server gave HTTP response to HTTPS client Open the mirroring_error_date_time.log file in the working-dir/logs folder located in the oc-mirror plugin v2 output directory. Look for error messages that typically indicate server-side issues, such as HTTP 500 errors, expired tokens, or timeouts. Retry the mirroring process or contact support if the issue persists. 
Check for incomplete mirroring of Operators: Example error error mirroring image docker://registry.redhat.io/3scale-amp2/zync-rhel9@sha256:8bb6b31e108d67476cc62622f20ff8db34efae5d58014de9502336fcc479d86d (Operator bundles: [3scale-operator.v0.11.12] - Operators: [3scale-operator]) error: initializing source docker://localhost:55000/3scale-amp2/zync-rhel9:8bb6b31e108d67476cc62622f20ff8db34efae5d58014de9502336fcc479d86d: reading manifest 8bb6b31e108d67476cc62622f20ff8db34efae5d58014de9502336fcc479d86d in localhost:55000/3scale-amp2/zync-rhel9: manifest unknown error mirroring image docker://registry.redhat.io/3scale-amp2/3scale-rhel7-operator-metadata@sha256:de0a70d1263a6a596d28bf376158056631afd0b6159865008a7263a8e9bf0c7d error: skipping operator bundle docker://registry.redhat.io/3scale-amp2/3scale-rhel7-operator-metadata@sha256:de0a70d1263a6a596d28bf376158056631afd0b6159865008a7263a8e9bf0c7d because one of its related images failed to mirror error mirroring image docker://registry.redhat.io/3scale-amp2/system-rhel7@sha256:fe77272021867cc6b6d5d0c9bd06c99d4024ad53f1ab94ec0ab69d0fda74588e (Operator bundles: [3scale-operator.v0.11.12] - Operators: [3scale-operator]) error: initializing source docker://localhost:55000/3scale-amp2/system-rhel7:fe77272021867cc6b6d5d0c9bd06c99d4024ad53f1ab94ec0ab69d0fda74588e: reading manifest fe77272021867cc6b6d5d0c9bd06c99d4024ad53f1ab94ec0ab69d0fda74588e in localhost:55000/3scale-amp2/system-rhel7: manifest unknown Check for warnings in the console or log file indicating which Operators are incomplete. If an Operator is flagged as incomplete, the image related to that Operator likely failed to mirror. Manually mirror the missing image or retry the mirroring process. Check for errors related to generated cluster resources. Even if some images fail to mirror, oc-mirror v2 will still generate cluster resources such as IDMS.yaml and ITMS.yaml files for the successfully mirrored images. Check the output directory for the generated files. If these files are missing for specific images, ensure that no critical errors occurred for those images during the mirroring process. By following these steps, you can better diagnose issues and ensure smoother mirroring. 5.9. Benefits of enclave support Enclave support restricts internal access to a specific part of a network. Unlike a demilitarized zone (DMZ) network, which allows inbound and outbound traffic access through firewall boundaries, enclaves do not cross firewall boundaries. Important Enclave Support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The new enclave support functionality is for scenarios where mirroring is needed for multiple enclaves that are secured behind at least one intermediate disconnected network. Enclave support has the following benefits: You can mirror content for multiple enclaves and centralize it in a single internal registry. Because some customers want to run security checks on the mirrored content, with this setup they can run these checks all at once. 
The content is then vetted before being mirrored to downstream enclaves. You can mirror content directly from the centralized internal registry to enclaves without restarting the mirroring process from the internet for each enclave. You can minimize data transfer between network stages, so to ensure that a blob or image is transferred only once from one stage to another. 5.9.1. Enclave mirroring workflow The image outlines the flow for using the oc-mirror plugin in different environments, including environments with and without an internet connection. Environment with Internet Connection : The user executes oc-mirror plugin v2 to mirror content from an online registry to a local disk directory. The mirrored content is saved to the disk for transfer to offline environments. Disconnected Enterprise Environment (No Internet) : Flow 1: The user runs oc-mirror plugin v2 to load the mirrored content from the disk directory, which was transferred from the online environment, into the enterprise-registry.in registry. Flow 2: After updating the registries.conf file, the user executes the oc-mirror plugin v2 to mirror content from the enterprise-registry.in registry to an enclave environment. The content is saved to a disk directory for transfer to the enclave. Enclave Environment (No Internet) : The user runs oc-mirror plugin v2 to load content from the disk directory into the enclave-registry.in registry. The image visually represents the data flow across these environments and emphasizes the use of oc-mirror to handle disconnected and enclave environments without an internet connection. 5.9.2. Mirroring to an enclave When you mirror to an enclave, you must first transfer the necessary images from one or more enclaves into the enterprise central registry. The central registry is situated within a secure network, specifically a disconnected environment, and is not directly linked to the public internet. But the user must execute oc mirror in an environment with access to the public internet. Procedure Before running oc-mirror plugin v2 in the disconnected environment, create a registries.conf file. The TOML format of the file is described in this specification: Note It is recommended to store the file under USDHOME/.config/containers/registries.conf or /etc/containers/registries.conf . Example registries.conf [[registry]] location="registry.redhat.io" [[registry.mirror]] location="<enterprise-registry.in>" [[registry]] location="quay.io" [[registry.mirror]] location="<enterprise-registry.in>" Generate a mirror archive. To collect all the OpenShift Container Platform content into an archive on the disk under <file_path>/enterprise-content , run the following command: USD oc mirror --v2 -c isc.yaml file://<file_path>/enterprise-content Example of isc.yaml apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: architectures: - "amd64" channels: - name: stable-4.15 minVersion: 4.15.0 maxVersion: 4.15.3 After the archive is generated, it is transferred to the disconnected environment. The transport mechanism is not part of oc-mirror plugin v2. The enterprise network administrators determine the transfer strategy. In some cases, the transfer is done manually, in that the disk is physically unplugged from one location, and plugged to another computer in the disconnected environment. In other cases, the Secure File Transfer Protocol (SFTP) or other protocols are used. 
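For example, if SSH access to a host on the disconnected network is available, the generated archive directory can be copied with a tool such as scp ; the host name and target path in this sketch are illustrative and are not part of oc-mirror plugin v2:
USD scp -r <file_path>/enterprise-content user@<disconnected_host>:/data/enterprise-content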
After the transfer of the archive is done, you can execute oc-mirror plugin v2 again in order to mirror the relevant archive contents to the registry ( enterprise_registry.in in the example) as demonstrated in the following example: USD oc mirror --v2 -c isc.yaml --from file://<disconnected_environment_file_path>/enterprise-content docker://<enterprise_registry.in>/ Where: --from points to the folder containing the archive and starts with the file:// prefix. The final argument, which starts with docker:// , is the destination of the mirroring: a container registry. -c ( --config ) is a mandatory argument. It enables oc-mirror plugin v2 to eventually mirror only sub-parts of the archive to the registry. One archive might contain several OpenShift Container Platform releases, but the disconnected environment or an enclave might mirror only a few. Prepare the imageSetConfig YAML file, which describes the content to mirror to the enclave: Example isc-enclave.yaml apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: architectures: - "amd64" channels: - name: stable-4.15 minVersion: 4.15.2 maxVersion: 4.15.2 You must run oc-mirror plugin v2 on a machine with access to the disconnected registry. In the example, the disconnected environment, enterprise-registry.in , is accessible. Update the graph URL If you are using graph:true , oc-mirror plugin v2 attempts to reach the Cincinnati API endpoint. Because this environment is disconnected, be sure to export the environment variable UPDATE_URL_OVERRIDE to refer to the URL for the OpenShift Update Service (OSUS): USD export UPDATE_URL_OVERRIDE=https://<osus.enterprise.in>/graph For more information on setting up OSUS on an OpenShift cluster, see "Updating a cluster in a disconnected environment using the OpenShift Update Service". Note When upgrading OpenShift Container Platform Extended Update Support (EUS) versions, an intermediate version might be required between the current and target versions. For example, if the current version is 4.14 and target version is 4.16 , you might need to include a version such as 4.15.8 in the ImageSetConfiguration when using the oc-mirror plugin v2. The oc-mirror plugin v2 might not always detect this automatically, so check the Cincinnati graph web page to confirm any required intermediate versions and add them manually to your configuration. Generate a mirror archive from the enterprise registry for the enclave. To prepare an archive for enclave1 , the user executes oc-mirror plugin v2 in the enterprise disconnected environment by using the imageSetConfiguration specific for that enclave. This ensures that only images needed by that enclave are mirrored: USD oc mirror --v2 -c isc-enclave.yaml file:///disk-enc1/ This action collects the specified OpenShift Container Platform content and generates an archive on disk. After the archive is generated, it will be transferred to the enclave1 network. The transport mechanism is not the responsibility of oc-mirror plugin v2. Mirror contents to the enclave registry After the transfer of the archive is done, the user can execute oc-mirror plugin v2 again in order to mirror the relevant archive contents to the registry. USD oc mirror --v2 -c isc-enclave.yaml --from file://local-disk docker://registry.enc1.in The administrators of the OpenShift Container Platform cluster in enclave1 are now ready to install or upgrade that cluster. 5.10.
How filtering works in the operator catalog oc-mirror plugin v2 selects the list of bundles for mirroring by processing the information in imageSetConfig . When oc-mirror plugin v2 selects bundles for mirroring, it does not infer Group Version Kind (GVK) or bundle dependencies, omitting them from the mirroring set. Instead, it strictly adheres to the user instructions. You must explicitly specify any required dependent packages and their versions. Bundle versions typically use semantic versioning standards (SemVer), and you can sort bundles within a channel by version. You can select buncles that fall within a specific range in the ImageSetConfig . This selection algorithm ensures consistent outcomes compared to oc-mirror plugin v1. However, it does not include upgrade graph details, such as replaces , skip , and skipRange . This approach differs from the OLM algorithm. It might mirror more bundles than necessary for upgrading a cluster because of potentially shorter upgrade paths between the minVersion and maxVersion . Table 5.1. Use the following table to see what bundle versions are included in different scenarios ImageSetConfig operator filtering Expected bundle versions Scenario 1 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 For each package in the catalog, 1 bundle, corresponding to the head version of the default channel for that package. Scenario 2 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 full: true All bundles of all channels of the specified catalog Scenario 3 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 packages: - name: compliance-operator One bundle, corresponding to the head version of the default channel for that package Scenario 4 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 full: true - packages: - name: elasticsearch-operator All bundles of all channels for the packages specified Scenario 5 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator minVersion: 5.6.0 All bundles in the default channel, from the minVersion , up to the channel head for that package that do not rely on the shortest path from upgrade the graph. Scenario 6 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator maxVersion: 6.0.0 All bundles in the default channel that are lower than the maxVersion for that package. Scenario 7 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator minVersion: 5.6.0 maxVersion: 6.0.0 All bundles in the default channel, between the minVersion and maxVersion for that package. The head of the channel is not included, even if multiple channels are included in the filtering. Scenario 8 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable The head bundle for the selected channel of that package. Scenario 9 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 full: true - packages: - name: elasticsearch-operator channels: - name: 'stable-v0' All bundles for the specified packages and channels. 
Scenario 10 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable - name: stable-5.5 The head bundle for each selected channel of that package. Scenario 11 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 Within the selected channel of that package, all versions starting with the minVersion up to the channel head. This scenario does not relyon the shortest path from the upgrade graph. Scenario 12 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable maxVersion: 6.0.0 Within the selected channel of that package, all versions up to the maxVersion (not relying on the shortest path from the upgrade graph). The head of the channel is not included, even if multiple channels are included in the filtering. Scenario 13 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0 Within the selected channel of that package, all versions between the minVersion and maxVersion , not relying on the shortest path from the upgrade graph. The head of the channel is not included, even if multiple channels are included in the filtering. Scenario 14 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: aws-load-balancer-operator bundles: - name: aws-load-balancer-operator.v1.1.0 - name: 3scale-operator bundles: - name: 3scale-operator.v0.10.0-mas Only the bundles specified for each package are included in the filtering. Scenario 15 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0 Do not use this scenario. filtering by channel and by package with a minVersion or maxVersion is not allowed. Scenario 16 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0 Do not use this scenario. You cannot filter using full:true and the minVersion or maxVersion . Scenario 17 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 full: true packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0 Do not use this scenario. You cannot filter using full:true and the minVersion or maxVersion . 5.11. ImageSet configuration parameters for oc-mirror plugin v2 The oc-mirror plugin v2 requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource. Note Using the minVersion and maxVersion properties to filter for a specific Operator version range can result in a multiple channel heads error. The error message states that there are multiple channel heads . This is because when the filter is applied, the update graph of the Operator is truncated. OLM requires that every Operator channel contains versions that form an update graph with exactly one end point, that is, the latest version of the Operator. When the filter range is applied, that graph can turn into two or more separate graphs or a graph that has more than one end point. 
To avoid this error, do not filter out the latest version of an Operator. If you still run into the error, depending on the Operator, either the maxVersion property must be increased or the minVersion property must be decreased. Because every Operator graph can be different, you might need to adjust these values until the error resolves. Table 5.2. ImageSetConfiguration parameters Parameter Description Values apiVersion The API version of the ImageSetConfiguration content. String Example: mirror.openshift.io/v2alpha1 archiveSize The maximum size, in GiB, of each archive file within the image set. Integer Example: 4 mirror The configuration of the image set. Object mirror.additionalImages The additional images configuration of the image set. Array of objects Example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest mirror.additionalImages.name The tag or digest of the image to mirror. String Example: registry.redhat.io/ubi8/ubi:latest mirror.blockedImages The full tag, digest, or pattern of images to block from mirroring. Array of strings Example: docker.io/library/alpine mirror.operators The Operators configuration of the image set. Array of objects Example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:4.16 packages: - name: elasticsearch-operator minVersion: '2.4.0' mirror.operators.catalog The Operator catalog to include in the image set. String Example: registry.redhat.io/redhat/redhat-operator-index:v4.15 mirror.operators.full When true , downloads the full catalog, Operator package, or Operator channel. Boolean The default value is false . mirror.operators.packages The Operator packages configuration. Array of objects Example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:4.16 packages: - name: elasticsearch-operator minVersion: '5.2.3-31' mirror.operators.packages.name The Operator package name to include in the image set. String Example: elasticsearch-operator mirror.operators.packages.channels Operator package channel configuration Object mirror.operators.packages.channels.name The Operator channel name, unique within a package, to include in the image set. String Eample: fast or stable-v4.15 mirror.operators.packages.channels.maxVersion The highest version of the Operator mirror across all channels in which it exists. String Example: 5.2.3-31 mirror.operators.packages.channels.minVersion The lowest version of the Operator to mirror across all channels in which it exists String Example: 5.2.3-31 mirror.operators.packages.maxVersion The highest version of the Operator to mirror across all channels in which it exists. String Example: 5.2.3-31 mirror.operators.packages.minVersion The lowest version of the Operator to mirror across all channels in which it exists. String Example: 5.2.3-31 mirror.operators.packages.bundles Selected bundles configuration Array of objects Example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:4.16 packages: - name: 3scale-operator bundles: - name: 3scale-operator.v0.10.0-mas mirror.operators.packages.bundles.name Name of the bundle selected for mirror (as it appears in the catalog). String Example : 3scale-operator.v0.10.0-mas mirror.operators.targetCatalog An alternative name and optional namespace hierarchy to mirror the referenced catalog as String Example: my-namespace/my-operator-catalog mirror.operators.targetCatalogSourceTemplate Path on disk for a template to use to complete catalogSource custom resource generated by oc-mirror plugin v2. 
String Example: /tmp/catalog-source_template.yaml Example of a template file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: discarded namespace: openshift-marketplace spec: image: discarded sourceType: grpc updateStrategy: registryPoll: interval: 30m0s mirror.operators.targetTag An alternative tag to append to the targetName or targetCatalog . String Example: v1 mirror.platform The platform configuration of the image set. Object mirror.platform.architectures The architecture of the platform release payload to mirror. Array of strings Example: architectures: - amd64 - arm64 - multi - ppc64le - s390x The default value is amd64 . The value multi ensures that the mirroring is supported for all available architectures, eliminating the need to specify individual architectures mirror.platform.channels The platform channel configuration of the image set. Array of objects Example: channels: - name: stable-4.12 - name: stable-4.16 mirror.platform.channels.full When true , sets the minVersion to the first release in the channel and the maxVersion to the last release in the channel. Boolean The default value is false mirror.platform.channels.name Name of the release channel String Example: stable-4.15 mirror.platform.channels.minVersion The minimum version of the referenced platform to be mirrored. String Example: 4.12.6 mirror.platform.channels.maxVersion The highest version of the referenced platform to be mirrored. String Example: 4.15.1 mirror.platform.channels.shortestPath Toggles shortest path mirroring or full range mirroring. Boolean The default value is false mirror.platform.channels.type Type of the platform to be mirrored String Example: ocp or okd . The default is ocp . mirror.platform.graph Indicates whether the OSUS graph is added to the image set and subsequently published to the mirror. Boolean The default value is false 5.11.1. Delete ImageSet Configuration parameters To use the oc-mirror plugin v2, you must have delete image set configuration file that defines which images to delete from the mirror registry. The following table lists the available parameters for the DeleteImageSetConfiguration resource. Table 5.3. DeleteImageSetConfiguration parameters Parameter Description Values apiVersion The API version for the DeleteImageSetConfiguration content. String Example: mirror.openshift.io/v2alpha1 delete The configuration of the image set to delete. Object delete.additionalImages The additional images configuration of the delete image set. Array of objects Example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest delete.additionalImages.name The tag or digest of the image to delete. String Example: registry.redhat.io/ubi8/ubi:latest delete.operators The Operators configuration of the delete image set. Array of objects Example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} packages: - name: elasticsearch-operator minVersion: '2.4.0' delete.operators.catalog The Operator catalog to include in the delete image set. String Example: registry.redhat.io/redhat/redhat-operator-index:v4.15 delete.operators.full When true, deletes the full catalog, Operator package, or Operator channel. 
Boolean The default value is false delete.operators.packages Operator packages configuration Array of objects Example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} packages: - name: elasticsearch-operator minVersion: '5.2.3-31' delete.operators.packages.name The Operator package name to include in the delete image set. String Example: elasticsearch-operator delete.operators.packages.channels Operator package channel configuration Object delete.operators.packages.channels.name The Operator channel name, unique within a package, to include in the delete image set. String Example: fast or stable-v4.15 delete.operators.packages.channels.maxVersion The highest version of the Operator to delete within the selected channel. String Example: 5.2.3-31 delete.operators.packages.channels.minVersion The lowest version of the Operator to delete within the selection in which it exists. String Example: 5.2.3-31 delete.operators.packages.maxVersion The highest version of the Operator to delete across all channels in which it exists. String Example: 5.2.3-31 delete.operators.packages.minVersion The lowest version of the Operator to delete across all channels in which it exists. String Example: 5.2.3-31 delete.operators.packages.bundles The selected bundles configuration Array of objects You cannot choose both channels and bundles for the same operator. Example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} packages: - name: 3scale-operator bundles: - name: 3scale-operator.v0.10.0-mas delete.operators.packages.bundles.name Name of the bundle selected to delete (as it is displayed in the catalog) String Example : 3scale-operator.v0.10.0-mas delete.platform The platform configuration of the image set Object delete.platform.architectures The architecture of the platform release payload to delete. Array of strings Example: architectures: - amd64 - arm64 - multi - ppc64le - s390x The default value is amd64 delete.platform.channels The platform channel configuration of the image set. Array of objects Example: channels: - name: stable-4.12 - name: stable-4.16 delete.platform.channels.full When true , sets the minVersion to the first release in the channel and the maxVersion to the last release in the channel. Boolean The default value is false delete.platform.channels.name Name of the release channel String Example: stable-4.15 delete.platform.channels.minVersion The minimum version of the referenced platform to be deleted. String Example: 4.12.6 delete.platform.channels.maxVersion The highest version of the referenced platform to be deleted. String Example: 4.15.1 delete.platform.channels.shortestPath Toggles between deleting the shortest path and deleting the full range. Boolean The default value is false delete.platform.channels.type Type of the platform to be deleted String Example: ocp or okd The default is ocp delete.platform.graph Determines whether the OSUS graph is deleted as well on the mirror registry as well. Boolean The default value is false 5.12. Command reference for oc-mirror plugin v2 The following tables describe the oc mirror subcommands and flags for oc-mirror plugin v2: Table 5.4. Subcommands and flags for the oc-mirror plugin v2 Subcommand Description help Show help about any subcommand version Output the oc-mirror version delete Deletes images in remote registry and local cache. Table 5.5. oc mirror flags Flag Description --authfile Displays the string path of the authentication file. 
Default is USD{XDG_RUNTIME_DIR}/containers/auth.json . -c , --config <string> Specifies the path to an image set configuration file. --dest-tls-verify Requires HTTPS and verifies certificates when accessing the container registry or daemon. --dry-run Prints actions without mirroring images. --from <string> Specifies the path to an image set archive that was generated by oc-mirror plugin v2 and is used to load a target registry. -h , --help Displays help. --loglevel Sets the log level. Supported values include info, debug, trace, and error. The default is info . -p , --port Determines the HTTP port used by the oc-mirror plugin v2 local storage instance. The default is 55000 . --max-nested-paths <int> Specifies the maximum number of nested paths for destination registries that limit nested paths. The default is 0 . --secure-policy Default value is false . If you set a non-default value, the command enables signature verification, which is the secure policy. --since Includes all new content since a specified date (format: yyyy-mm-dd ). When not provided, new content since the previous mirroring is mirrored. --src-tls-verify Requires HTTPS and verifies certificates when accessing the container registry or daemon. --strict-archive Default value is false . If you set a value, the command generates archives that are strictly less than the archiveSize that was set in the imageSetConfig custom resource (CR). Mirroring exits with an error if a file being archived exceeds archiveSize (GB). -v , --version Displays the version of oc-mirror plugin v2. --workspace Specifies the oc-mirror plugin v2 workspace where resources and internal artifacts are generated. Configuring your cluster to use the resources generated by oc-mirror
[ "tar xvzf oc-mirror.tar.gz", "chmod +x oc-mirror", "sudo mv oc-mirror /usr/local/bin/.", "oc mirror --v2 --help", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: channels: - name: stable-4.13 minVersion: 4.13.10 maxVersion: 4.13.10 graph: true operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: aws-load-balancer-operator - name: 3scale-operator - name: node-observability-operator additionalImages: - name: registry.redhat.io/ubi8/ubi:latest - name: registry.redhat.io/ubi9/ubi@sha256:20f695d2a91352d4eaa25107535126727b5945bff38ed36a3e59590f495046f0", "oc mirror -c isc.yaml --workspace file://<file_path> docker://<mirror_registry_url> --v2 1", "oc mirror -c isc.yaml file://<file_path> --v2 1", "oc mirror -c isc.yaml --from file://<file_path> docker://<mirror_registry_url> --v2 1", "oc apply -f <path_to_oc-mirror_workspace>/working-dir/cluster-resources", "oc get imagedigestmirrorset", "oc get imagetagmirrorset", "oc get catalogsource -n openshift-marketplace", "apiVersion: mirror.openshift.io/v2alpha1 kind: DeleteImageSetConfiguration delete: platform: channels: - name: stable-4.13 minVersion: 4.13.3 maxVersion: 4.13.3 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: aws-load-balancer-operator minVersion: 0.0.1 maxVersion: 0.0.1 additionalImages: - name: registry.redhat.io/ubi8/ubi@sha256:bce7e9f69fb7d4533447232478fd825811c760288f87a35699f9c8f030f2c1a6 - name: registry.redhat.io/ubi8/ubi-minimal@sha256:8bedbe742f140108897fb3532068e8316900d9814f399d676ac78b46e740e34e", "oc mirror delete --config delete-image-set-config.yaml --workspace file://<previously_mirrored_work_folder> --v2 --generate docker://<remote_registry>", "oc mirror delete --v2 --delete-yaml-file <previously_mirrored_work_folder>/delete/delete-images.yaml docker:/ <remote_registry>", "oc mirror -c <image_set_config_yaml> --from file://<oc_mirror_workspace_path> docker://<mirror_registry_url> --dry-run --v2", "oc mirror --config /tmp/isc_dryrun.yaml file://<oc_mirror_workspace_path> --dry-run --v2 [INFO] : :warning: --v2 flag identified, flow redirected to the oc-mirror v2 version. This is Tech Preview, it is still under development and it is not production ready. 
[INFO] : :wave: Hello, welcome to oc-mirror [INFO] : :gear: setting up the environment for you [INFO] : :twisted_rightwards_arrows: workflow mode: mirrorToDisk [INFO] : :sleuth_or_spy: going to discover the necessary images [INFO] : :mag: collecting release images [INFO] : :mag: collecting operator images [INFO] : :mag: collecting additional images [WARN] : :warning: 54/54 images necessary for mirroring are not available in the cache. [WARN] : List of missing images in : CLID-19/working-dir/dry-run/missing.txt. please re-run the mirror to disk process [INFO] : :page_facing_up: list of all images for mirroring in : CLID-19/working-dir/dry-run/mapping.txt [INFO] : mirror time : 9.641091076s [INFO] : :wave: Goodbye, thank you for using oc-mirror", "cd <oc_mirror_workspace_path>", "[ERROR] : [Worker] error mirroring image localhost:55000/openshift/graph-image:latest error: copying image 1/4 from manifest list: trying to reuse blob sha256:edab65b863aead24e3ed77cea194b6562143049a9307cd48f86b542db9eecb6e at destination: pinging container registry localhost:5000: Get \"https://localhost:5000/v2/\": http: server gave HTTP response to HTTPS client", "error mirroring image docker://registry.redhat.io/3scale-amp2/zync-rhel9@sha256:8bb6b31e108d67476cc62622f20ff8db34efae5d58014de9502336fcc479d86d (Operator bundles: [3scale-operator.v0.11.12] - Operators: [3scale-operator]) error: initializing source docker://localhost:55000/3scale-amp2/zync-rhel9:8bb6b31e108d67476cc62622f20ff8db34efae5d58014de9502336fcc479d86d: reading manifest 8bb6b31e108d67476cc62622f20ff8db34efae5d58014de9502336fcc479d86d in localhost:55000/3scale-amp2/zync-rhel9: manifest unknown error mirroring image docker://registry.redhat.io/3scale-amp2/3scale-rhel7-operator-metadata@sha256:de0a70d1263a6a596d28bf376158056631afd0b6159865008a7263a8e9bf0c7d error: skipping operator bundle docker://registry.redhat.io/3scale-amp2/3scale-rhel7-operator-metadata@sha256:de0a70d1263a6a596d28bf376158056631afd0b6159865008a7263a8e9bf0c7d because one of its related images failed to mirror error mirroring image docker://registry.redhat.io/3scale-amp2/system-rhel7@sha256:fe77272021867cc6b6d5d0c9bd06c99d4024ad53f1ab94ec0ab69d0fda74588e (Operator bundles: [3scale-operator.v0.11.12] - Operators: [3scale-operator]) error: initializing source docker://localhost:55000/3scale-amp2/system-rhel7:fe77272021867cc6b6d5d0c9bd06c99d4024ad53f1ab94ec0ab69d0fda74588e: reading manifest fe77272021867cc6b6d5d0c9bd06c99d4024ad53f1ab94ec0ab69d0fda74588e in localhost:55000/3scale-amp2/system-rhel7: manifest unknown", "[[registry]] location=\"registry.redhat.io\" [[registry.mirror]] location=\"<enterprise-registry.in>\" [[registry]] location=\"quay.io\" [[registry.mirror]] location=\"<enterprise-registry.in>\"", "oc mirror --v2 -c isc.yaml file://<file_path>/enterprise-content", "apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.15 minVersion: 4.15.0 maxVersion: 4.15.3", "oc mirror --v2 -c isc.yaml --from file://<disconnected_environment_file_path>/enterprise-content docker://<enterprise_registry.in>/", "apiVersion: mirror.openshift.io/v2alpha1 kind: ImageSetConfiguration mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.15 minVersion: 4.15.2 maxVersion: 4.15.2", "export UPDATE_URL_OVERRIDE=https://<osus.enterprise.in>/graph", "oc mirror --v2 -c isc-enclave.yaml file:///disk-enc1/", "oc mirror --v2 -c isc-enclave.yaml --from file://local-disk docker://registry.enc1.in", 
"mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 full: true", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 packages: - name: compliance-operator", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 full: true - packages: - name: elasticsearch-operator", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator minVersion: 5.6.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator maxVersion: 6.0.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator minVersion: 5.6.0 maxVersion: 6.0.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10 full: true - packages: - name: elasticsearch-operator channels: - name: 'stable-v0'", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable - name: stable-5.5", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable maxVersion: 6.0.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: aws-load-balancer-operator bundles: - name: aws-load-balancer-operator.v1.1.0 - name: 3scale-operator bundles: - name: 3scale-operator.v0.10.0-mas", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0", "mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 full: true packages: - name: compliance-operator channels - name: stable minVersion: 5.6.0 maxVersion: 6.0.0", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:4.16 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:4.16 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:4.16 packages: - name: 3scale-operator bundles: - name: 3scale-operator.v0.10.0-mas", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: discarded namespace: openshift-marketplace spec: image: discarded sourceType: grpc updateStrategy: registryPoll: interval: 30m0s", "architectures: - amd64 - arm64 - multi - ppc64le - s390x", "channels: - name: stable-4.12 - name: stable-4.16", 
"additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} packages: - name: 3scale-operator bundles: - name: 3scale-operator.v0.10.0-mas", "architectures: - amd64 - arm64 - multi - ppc64le - s390x", "channels: - name: stable-4.12 - name: stable-4.16" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/disconnected_installation_mirroring/about-installing-oc-mirror-v2
8.94. iprutils
8.94. iprutils 8.94.1. RHBA-2014:1432 - iprutils bug fix and enhancement update Updated iprutils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The iprutils packages provide utilities to manage and configure SCSI devices that are supported by the ipr SCSI storage device driver. Note The iprutils package has been upgraded to upstream version 2.4.2, which provides a number of bug fixes and enhancements over the previous version. Specifically, this update provides support for the new vRAID Serial Attached SCSI (SAS) adapters. (BZ# 929292 ) This update also fixes the following bug: Bug Fix BZ# 1127825 Previously, information on "Read Intensive" disks did not display in the iprconfig menu. The underlying source code has been patched, and the disk information is now displayed correctly. Users of iprutils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/iprutils
Preface
Preface Use accelerators, such as NVIDIA GPUs, AMD GPUs, and Intel Gaudi AI accelerators, to optimize the performance of your end-to-end data science workflows.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_accelerators/pr01
9.2. Automatic NUMA Balancing
9.2. Automatic NUMA Balancing Automatic NUMA balancing improves the performance of applications running on NUMA hardware systems. It is enabled by default on Red Hat Enterprise Linux 7 systems. An application will generally perform best when the threads of its processes are accessing memory on the same NUMA node as the threads are scheduled. Automatic NUMA balancing moves tasks (which can be threads or processes) closer to the memory they are accessing. It also moves application data to memory closer to the tasks that reference it. This is all done automatically by the kernel when automatic NUMA balancing is active. Automatic NUMA balancing uses a number of algorithms and data structures, which are only active and allocated if automatic NUMA balancing is active on the system: Periodic NUMA unmapping of process memory NUMA hinting fault Migrate-on-Fault (MoF) - moves memory to where the program using it runs task_numa_placement - moves running programs closer to their memory 9.2.1. Configuring Automatic NUMA Balancing Automatic NUMA balancing is enabled by default in Red Hat Enterprise Linux 7, and will automatically activate when booted on hardware with NUMA properties. Automatic NUMA balancing is enabled when both of the following conditions are met: # numactl --hardware shows multiple nodes # cat /proc/sys/kernel/numa_balancing shows 1 Manual NUMA tuning of applications will override automatic NUMA balancing, disabling periodic unmapping of memory, NUMA faults, migration, and automatic NUMA placement of those applications. In some cases, system-wide manual NUMA tuning is preferred. To disable automatic NUMA balancing, use the following command: To enable automatic NUMA balancing, use the following command:
[ "echo 0 > /proc/sys/kernel/numa_balancing", "echo 1 > /proc/sys/kernel/numa_balancing" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-NUMA-Auto_NUMA_Balancing
Chapter 3. Analyzing your projects with the MTA extension
Chapter 3. Analyzing your projects with the MTA extension You can analyze your projects with the MTA extension by creating a run configuration and running an analysis. 3.1. MTA extension interface The interface of the Migration Toolkit for Applications (MTA) extension is designed to make it easier for you to find information and perform actions: In the left pane, you can see a directory tree named Analysis Results with a report icon at its top. You can click the icon to open the MTA report in your browser. Beneath the report icon are the other elements of the tree: the applications analyzed by MTA, the rulesets used, and the issues discovered by the analysis. In the upper right pane, you can configure an analysis. In the lower right pane, you can see the settings of the configuration, including source, target, and advanced options. You can view the progress of an analysis in this pane. When the analysis is completed, you can click the Open Report button to open the MTA report, which describes any issues you need to address before you migrate or modernize your application. For more information, see Reviewing the reports in the CLI Guide . 3.2. Configuring a run configuration You can configure multiple run configurations to run against each project you import to VS Code. Prerequisites mta-cli executable installed. You can download the mta-cli executable from mta download . Procedure In the Extensions view, click the Migration Toolkit for Applications icon on the Activity bar. Click the + (plus sign) next to Migration Toolkit for Applications to add a run configuration. Complete the following configuration fields: Name : Enter a meaningful name for the analysis configuration or accept the default. cli : Enter the path to the cli executable. For example: $HOME/mta-cli-7.1.1.GA-redhat/bin/mta-cli . Input : Set to the path of the project that you have open within your IDE by clicking Add and doing one of the following: Enter the input file or directory and press Enter. Click Open File Explorer and select the directory. Target : Select one or more target migration paths. Right-click the run configuration and select Run . When the analysis is completed, you can click the Open Report button to open the MTA report, which describes any issues you need to address before you migrate or modernize your application. For more information, see Reviewing the reports in the CLI Guide .
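The run configuration drives the same analysis that the mta-cli executable performs on the command line. The following is a rough sketch only; the analyze subcommand and the --input, --output, and --target flags are assumptions based on the MTA CLI and should be verified against the CLI Guide, and the eap8 target is an example value:
# Analyze a local project and write the report to an output directory
$HOME/mta-cli-7.1.1.GA-redhat/bin/mta-cli analyze --input <project_directory> --output <output_directory> --target eap8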
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/visual_studio_code_extension_guide/analyzing-projects-with-vs-code-extension_vsc-extension-guide
16.10.2. Installation
16.10.2. Installation To use virt-win-reg, you must run the following command:
[ "yum install /usr/bin/virt-win-reg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virt-win-reg-install
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_infiniband_and_rdma_networks/proc_providing-feedback-on-red-hat-documentation_configuring-infiniband-and-rdma-networks
Chapter 17. Configuring a multi-site, fault-tolerant messaging system
Chapter 17. Configuring a multi-site, fault-tolerant messaging system Large-scale enterprise messaging systems commonly have discrete broker clusters located in geographically distributed data centers. In the event of a data center outage, system administrators might need to preserve existing messaging data and ensure that client applications can continue to produce and consume messages. You can use specific broker topologies and the Red Hat Ceph Storage software-defined storage platform to ensure continuity of your messaging system during a data center outage. This type of solution is called a multi-site, fault-tolerant architecture . The following sections explain how to protect your messaging system from data center outages. These sections provide information about: How Red Hat Ceph Storage clusters work Installing and configuring a Red Hat Ceph Storage cluster Adding backup brokers to take over from live brokers in the event of a data center outage Configuring your broker servers with the Ceph client role Configuring each broker to use the shared store high-availability (HA) policy, specifying where in the Ceph File System each broker stores its messaging data Configuring client applications to connect to new brokers in the event of a data center outage Restarting a data center after an outage Note Multi-site fault tolerance is not a replacement for high-availability (HA) broker redundancy within data centers. Broker redundancy based on live-backup groups provides automatic protection against single broker failures within single clusters. By contrast, multi-site fault tolerance protects against large-scale data center outages. Note To use Red Hat Ceph Storage to ensure continuity of your messaging system, you must configure your brokers to use the shared store high-availability (HA) policy. You cannot configure your brokers to use the replication HA policy. For more information about these policies, see Implementing High Availability . 17.1. How Red Hat Ceph Storage clusters work Red Hat Ceph Storage is a clustered object storage system. Red Hat Ceph Storage uses data sharding of objects and policy-based replication to guarantee data integrity and system availability. Red Hat Ceph Storage uses an algorithm called CRUSH (Controlled Replication Under Scalable Hashing) to determine how to store and retrieve data by automatically computing data storage locations. You configure Ceph items called CRUSH maps , which detail cluster topography and specify how data is replicated across storage clusters. CRUSH maps contain lists of Object Storage Devices (OSDs), a list of 'buckets' for aggregating the devices into a failure domain hierarchy, and rules that tell CRUSH how it should replicate data in a Ceph cluster's pools. By reflecting the underlying physical organization of the installation, CRUSH maps can model - and thereby address - potential sources of correlated device failures, such as physical proximity, shared power sources, and shared networks. By encoding this information into the cluster map, CRUSH can separate object replicas across different failure domains (for example, data centers) while still maintaining a pseudo-random distribution of data across the storage cluster. This helps to prevent data loss and enables the cluster to operate in a degraded state. Red Hat Ceph Storage clusters require a number of nodes (physical or virtual) to operate. 
Clusters must include the following types of nodes: Monitor nodes Each Monitor (MON) node runs the monitor daemon ( ceph-mon ), which maintains a master copy of the cluster map. The cluster map includes the cluster topology. A client connecting to the Ceph cluster retrieves the current copy of the cluster map from the Monitor, which enables the client to read from and write data to the cluster. Important A Red Hat Ceph Storage cluster can run with one Monitor node; however, to ensure high availability in a production cluster, Red Hat supports only deployments with at least three Monitor nodes. A minimum of three Monitor nodes means that in the event of the failure or unavailability of one Monitor, a quorum exists for the remaining Monitor nodes in the cluster to elect a new leader. Manager nodes Each Manager (MGR) node runs the Ceph Manager daemon ( ceph-mgr ), which is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. Usually, Manager nodes are colocated (that is, on the same host machine) with Monitor nodes. Object Storage Device nodes Each Object Storage Device (OSD) node runs the Ceph OSD daemon ( ceph-osd ), which interacts with logical disks attached to the node. Ceph stores data on OSD nodes. Ceph can run with very few OSD nodes (the default is three), but production clusters realize better performance at modest scales, for example, with 50 OSDs in a storage cluster. Having multiple OSDs in a storage cluster enables system administrators to define isolated failure domains within a CRUSH map. Metadata Server nodes Each Metadata Server (MDS) node runs the MDS daemon ( ceph-mds ), which manages metadata related to files stored on the Ceph File System (CephFS). The MDS daemon also coordinates access to the shared cluster. Additional resources For more information about Red Hat Ceph Storage, see What is Red Hat Ceph Storage? 17.2. Installing Red Hat Ceph Storage AMQ Broker multi-site, fault-tolerant architectures use Red Hat Ceph Storage 3. By replicating data across data centers, a Red Hat Ceph Storage cluster effectively creates a shared store available to brokers in separate data centers. You configure your brokers to use the shared store high-availability (HA) policy and store messaging data in the Red Hat Ceph Storage cluster. Red Hat Ceph Storage clusters intended for production use should have a minimum of: Three Monitor (MON) nodes Three Manager (MGR) nodes Three Object Storage Device (OSD) nodes containing multiple OSD daemons Three Metadata Server (MDS) nodes Important You can run the OSD, MON, MGR, and MDS nodes on either the same or separate physical or virtual machines. However, to ensure fault tolerance within your Red Hat Ceph Storage cluster, it is good practice to distribute each of these types of nodes across distinct data centers. In particular, you must ensure that in the event of a single data center outage, your storage cluster still has a minimum of two available MON nodes. Therefore, if you have three MON nodes in your cluster, each of these nodes must run on separate host machines in separate data centers. Do not run two MON nodes in a single data center, because failure of this data center will leave your storage cluster with only one remaining MON node. In this situation, the storage cluster can no longer operate.
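Because the MON quorum determines whether the cluster can operate, it is worth verifying it after installation. A minimal sketch, assuming the ceph CLI is run from an administration node with a valid admin keyring:
# Show the monitor map and which MONs are currently in quorum
ceph mon stat
# Show more detailed quorum information
ceph quorum_status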
The procedures linked-to from this section show you how to install a Red Hat Ceph Storage 3 cluster that includes MON, MGR, OSD, and MDS nodes. Prerequisites For information about preparing a Red Hat Ceph Storage installation, see: Prerequisites Requirements Checklist for Installing Red Hat Ceph Storage Procedure For procedures that show how to install a Red Hat Ceph 3 storage cluster that includes MON, MGR, OSD, and MDS nodes, see: Installing a Red Hat Ceph Storage Cluster Installing Metadata Servers 17.3. Configuring a Red Hat Ceph Storage cluster This example procedure shows how to configure your Red Hat Ceph storage cluster for fault tolerance. You create CRUSH buckets to aggregate your Object Storage Device (OSD) nodes into data centers that reflect your real-life, physical installation. In addition, you create a rule that tells CRUSH how to replicate data in your storage pools. These steps update the default CRUSH map that was created by your Ceph installation. Prerequisites You have already installed a Red Hat Ceph Storage cluster. For more information, see Installing Red Hat Ceph Storage . You should understand how Red Hat Ceph Storage uses Placement Groups (PGs) to organize large numbers of data objects in a pool, and how to calculate the number of PGs to use in your pool. For more information, see Placement Groups (PGs) . You should understand how to set the number of object replicas in a pool. For more information, see Set the Number of Object Replicas . Procedure Create CRUSH buckets to organize your OSD nodes. Buckets are lists of OSDs, based on physical locations such as data centers. In Ceph, these physical locations are known as failure domains . ceph osd crush add-bucket dc1 datacenter ceph osd crush add-bucket dc2 datacenter Move the host machines for your OSD nodes to the data center CRUSH buckets that you created. Replace host names host1 - host4 with the names of your host machines. ceph osd crush move host1 datacenter=dc1 ceph osd crush move host2 datacenter=dc1 ceph osd crush move host3 datacenter=dc2 ceph osd crush move host4 datacenter=dc2 Ensure that the CRUSH buckets you created are part of the default CRUSH tree. ceph osd crush move dc1 root=default ceph osd crush move dc2 root=default Create a rule to map storage object replicas across your data centers. This helps to prevent data loss and enables your cluster to stay running in the event of a single data center outage. The command to create a rule uses the following syntax: ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> . An example is shown below. ceph osd crush rule create-replicated multi-dc default datacenter hdd Note In the preceding command, if your storage cluster uses solid-state drives (SSD), specify ssd instead of hdd (hard disk drives). Configure your Ceph data and metadata pools to use the rule that you created. Initially, this might cause data to be backfilled to the storage destinations determined by the CRUSH algorithm. ceph osd pool set cephfs_data crush_rule multi-dc ceph osd pool set cephfs_metadata crush_rule multi-dc Specify the numbers of Placement Groups (PGs) and Placement Groups for Placement (PGPs) for your metadata and data pools. The PGP value should be equal to the PG value. ceph osd pool set cephfs_metadata pg_num 128 ceph osd pool set cephfs_metadata pgp_num 128 ceph osd pool set cephfs_data pg_num 128 ceph osd pool set cephfs_data pgp_num 128 Specify the numbers of replicas to be used by your data and metadata pools.
ceph osd pool set cephfs_data min_size 1 ceph osd pool set cephfs_metadata min_size 1 ceph osd pool set cephfs_data size 2 ceph osd pool set cephfs_metadata size 2 The following figure shows the Red Hat Ceph Storage cluster created by the preceding example procedure. The storage cluster has OSDs organized into CRUSH buckets corresponding to data centers. The following figure shows a possible layout of the first data center, including your broker servers. Specifically, the data center hosts: The servers for two live-backup broker pairs The OSD nodes that you assigned to the first data center in the preceding procedure Single Metadata Server, Monitor, and Manager nodes. The Monitor and Manager nodes are usually co-located on the same machine. Important You can run the OSD, MON, MGR, and MDS nodes on either the same or separate physical or virtual machines. However, to ensure fault tolerance within your Red Hat Ceph Storage cluster, it is good practice to distribute each of these types of nodes across distinct data centers. In particular, you must ensure that in the event of a single data center outage, your storage cluster still has a minimum of two available MON nodes. Therefore, if you have three MON nodes in your cluster, each of these nodes must run on separate host machines in separate data centers. The following figure shows a complete example topology. To ensure fault tolerance in your storage cluster, the MON, MGR, and MDS nodes are distributed across three separate data centers. Note Locating the host machines for certain OSD nodes in the same data center as your broker servers does not mean that you store messaging data on those specific OSD nodes. You configure the brokers to store messaging data in a specified directory in the Ceph File System. The Metadata Server nodes in your cluster then determine how to distribute the stored data across all available OSDs in your data centers and handle replication of this data across data centers. The sections that follow show how to configure brokers to store messaging data on the Ceph File System. The figure below illustrates replication of data between the two data centers that have broker servers. Additional resources For more information about: Administrating CRUSH for your Red Hat Ceph Storage cluster, see CRUSH Administration . The full set of attributes that you can set on a storage pool, see Pool Values . 17.4. Mounting the Ceph File System on your broker servers Before you can configure brokers in your messaging system to store messaging data in your Red Hat Ceph Storage cluster, you first need to mount a Ceph File System (CephFS). The procedure linked-to from this section shows you how to mount the CephFS on your broker servers. Prerequisites You have: Installed and configured a Red Hat Ceph Storage cluster. For more information, see Installing Red Hat Ceph Storage and Configuring a Red Hat Ceph Storage cluster . Installed and configured three or more Ceph Metadata Server daemons ( ceph-mds ). For more information, see Installing Metadata Servers and Configuring Metadata Server Daemons . Created the Ceph File System from a Monitor node. For more information, see Creating the Ceph File System . Created a Ceph File System client user with a key that your broker servers can use for authorized access. For more information, see Creating Ceph File System Client Users . Procedure For instructions on mounting the Ceph File System on your broker servers, see Mounting the Ceph File System as a kernel client . 17.5.
Configuring brokers in a multi-site, fault-tolerant messaging system To configure your brokers as part of a multi-site, fault-tolerant messaging system, you need to: Add idle backup brokers to take over from live brokers in the event of a data center failure Configure all broker servers with the Ceph client role Configure each broker to use the shared store high-availability (HA) policy, specifying where in the Ceph File System the broker stores its messaging data 17.5.1. Adding backup brokers Within each of your data centers, you need to add idle backup brokers that can take over from live master-slave broker groups that shut down in the event of a data center outage. You should replicate the configuration of live master brokers in your idle backup brokers. You also need to configure your backup brokers to accept client connections in the same way as your existing brokers. In a later procedure, you see how to configure an idle backup broker to join an existing master-slave broker group. You must locate the idle backup broker in a separate data center to that of the live master-slave broker group. It is also recommended that you manually start the idle backup broker only in the event of a data center failure. The following figure shows an example topology. Additional resources To learn how to create additional broker instances, see Creating a standalone broker . For information about configuring broker network connections, see Network Connections: Acceptors and Connectors . 17.5.2. Configuring brokers as Ceph clients When you have added the backup brokers that you need for a fault-tolerant system, you must configure all of the broker servers with the Ceph client role. The client role enables brokers to store data in your Red Hat Ceph Storage cluster. To learn how to configure Ceph clients, see Installing the Ceph Client Role . 17.5.3. Configuring shared store high availability The Red Hat Ceph Storage cluster effectively creates a shared store that is available to brokers in different data centers. To ensure that messages remain available to broker clients in the event of a failure, you configure each broker in your live-backup group to use: The shared store high availability (HA) policy The same journal, paging, and large message directories in the Ceph File System The following procedure shows how to configure the shared store HA policy on the master, slave, and idle backup brokers of your live-backup group. Procedure Edit the broker.xml configuration file of each broker in the live-backup group. Configure each broker to use the same paging, bindings, journal, and large message directories in the Ceph File System.
# Master Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> # Slave Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> # Backup Broker (Idle) - DC2 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> Configure the backup broker as a master within its HA policy, as shown below. This configuration setting ensures that the backup broker immediately becomes the master when you manually start it. Because the broker is an idle backup, the failover-on-shutdown parameter that you can specify for an active master broker does not apply in this case. <configuration> <core> ... <ha-policy> <shared-store> <master> </master> </shared-store> </ha-policy> ... </core> </configuration> Additional resources For more information about configuring the shared store high availability policy for live-backup broker groups, see Configuring shared store high availability . 17.6. Configuring clients in a multi-site, fault-tolerant messaging system An internal client application is one that is running on a machine located in the same data center as the broker server. The following figure shows this topology. An external client application is one running on a machine located outside the broker data center. The following figure shows this topology. The following sub-sections show examples of configuring your internal and external client applications to connect to a backup broker in another data center in the event of a data center outage. 17.6.1. Configuring internal clients If you experience a data center outage, internal client applications will shut down along with your brokers. To mitigate this situation, you must have another instance of the client application available in a separate data center. In the event of a data center outage, you manually start your backup client to connect to a backup broker that you have also manually started. To enable the backup client to connect to a backup broker, you need to configure the client connection similarly to that of the client in your primary data center. Example A basic connection configuration for an AMQ Core Protocol JMS client to a master-slave broker group is shown below. In this example, host1 and host2 are the host servers for the master and slave brokers. To configure a backup client to connect to a backup broker in the event of a data center outage, use a similar connection configuration, but specify only the host name of your backup broker server. In this example, the backup broker server is host3. Additional resources For more information about configuring broker and client network connections, see: Network Connections: Acceptors and Connectors . Configuring a Connection from the Client Side . 17.6.2.
Configuring external clients To enable an external broker client to continue producing or consuming messaging data in the event of a data center outage, you must configure the client to fail over to a broker in another data center. In the case of a multi-site, fault-tolerant system, you configure the client to fail over to the backup broker that you manually start in the event of an outage. Examples Shown below are examples of configuring the AMQ Core Protocol JMS and AMQP JMS clients to fail over to a backup broker in the event that the primary master-slave group is unavailable. In these examples, host1 and host2 are the host servers for the primary master and slave brokers, while host3 is the host server for the backup broker that you manually start in the event of a data center outage. To configure an AMQ Core Protocol JMS client, include the backup broker on the ordered list of brokers that the client attempts to connect to. To configure an AMQP JMS client, include the backup broker in the failover URI that you configure on the client. Additional resources For more information about configuring failover on: The AMQ Core Protocol JMS client, see Reconnect and failover . The AMQP JMS client, see Failover options . Other supported clients, consult the client-specific documentation in the AMQ Clients section of Product Documentation for Red Hat AMQ 7.8 . 17.7. Verifying storage cluster health during a data center outage When you have configured your Red Hat Ceph Storage cluster for fault tolerance, the cluster continues to run in a degraded state without losing data, even when one of your data centers fails. This procedure shows how to verify the status of your cluster while it runs in a degraded state. Procedure To verify the status of your Ceph storage cluster, use the health or status commands: To watch the ongoing events of the cluster on the command line, open a new terminal. Then, enter: When you run any of the preceding commands, you see output indicating that the storage cluster is still running, but in a degraded state. Specifically, you should see a warning that resembles the following: health: HEALTH_WARN 2 osds down Degraded data redundancy: 42/84 objects degraded (50.0%), 16 pgs unclean, 16 pgs degraded Additional resources For more information about monitoring the health of your Red Hat Ceph Storage cluster, see Monitoring . 17.8. Maintaining messaging continuity during a data center outage The following procedure shows you how to keep brokers and associated messaging data available to clients during a data center outage. Specifically, when a data center fails, you need to: Manually start any idle backup brokers that you created to take over from brokers in your failed data center. Connect internal or external clients to the new active brokers. Prerequisites You must have: Installed and configured a Red Hat Ceph Storage cluster. For more information, see Installing Red Hat Ceph Storage and Configuring a Red Hat Ceph Storage cluster . Mounted the Ceph File System. For more information, see Mounting the Ceph File System on your broker servers . Added idle backup brokers to take over from live brokers in the event of a data center failure. For more information, see Adding backup brokers . Configured your broker servers with the Ceph client role. For more information, see Configuring brokers as Ceph clients . Configured each broker to use the shared store high availability (HA) policy, specifying where in the Ceph File System each broker stores its messaging data . 
For more information, see Configuring shared store high availability . Configured your clients to connect to backup brokers in the event of a data center outage. For more information, see Configuring clients in a multi-site, fault-tolerant messaging system . Procedure For each master-slave broker pair in the failed data center, manually start the idle backup broker that you added. Reestablish client connections. If you were using an internal client in the failed data center, manually start the backup client that you created. As described in Configuring clients in a multi-site, fault-tolerant messaging system , you must configure the client to connect to the backup broker that you manually started. The following figure shows the new topology. If you have an external client, manually connect the external client to the new active broker or observe that the client automatically fails over to the new active broker, based on its configuration. For more information, see Configuring external clients . The following figure shows the new topology. 17.9. Restarting a previously failed data center When a previously failed data center is back online, follow these steps to restore the original state of your messaging system: Restart the servers that host the nodes of your Red Hat Ceph Storage cluster Restart the brokers in your messaging system Re-establish connections from your client applications to your restored brokers The following sub-sections show how to perform these steps. 17.9.1. Restarting storage cluster servers When you restart Monitor, Metadata Server, Manager, and Object Storage Device (OSD) nodes in a previously failed data center, your Red Hat Ceph Storage cluster self-heals to restore full data redundancy. During this process, Red Hat Ceph Storage automatically backfills data to the restored OSD nodes, as needed. To verify that your storage cluster is automatically self-healing and restoring full data redundancy, use the commands previously shown in Verifying storage cluster health during a data center outage . When you re-execute these commands, you see that the percentage of degraded objects indicated by the HEALTH_WARN message decreases as data redundancy returns to 100%. 17.9.2. Restarting broker servers The following procedure shows how to restart your broker servers when your storage cluster is no longer operating in a degraded state. Procedure Stop any client applications connected to backup brokers that you manually started when the data center outage occurred. Stop the backup brokers that you manually started. On Linux: BROKER_INSTANCE_DIR/bin/artemis stop On Windows: BROKER_INSTANCE_DIR\bin\artemis-service.exe stop In your previously failed data center, restart the original master and slave brokers. On Linux: BROKER_INSTANCE_DIR/bin/artemis run On Windows: BROKER_INSTANCE_DIR\bin\artemis-service.exe start The original master broker automatically resumes its role as master when you restart it. 17.9.3. Reestablishing client connections When you have restarted your broker servers, reconnect your client applications to those brokers. The following subsections describe how to reconnect both internal and external client applications. 17.9.3.1. Reconnecting internal clients Internal clients are those running in the same, previously failed data center as the restored brokers. To reconnect internal clients, restart them. Each client application reconnects to the restored master broker that is specified in its connection configuration.
For more information about configuring broker and client network connections, see: Network Connections: Acceptors and Connectors Configuring a Connection from the Client Side 17.9.3.2. Reconnecting external clients External clients are those running outside the data center that previously failed. Based on your client type, and the information in Configuring external broker clients , you either configured the client to automatically fail over to a backup broker, or you manually established this connection. When you restore your previously failed data center, you reestablish a connection from your client to the restored master broker in a similar way, as described below. If you configured your external client to automatically fail over to a backup broker, the client automatically fails back to the original master broker when you shut down the backup broker and restart the original master broker. If you manually connected the external client to a backup broker when a data center outage occurred, you must manually reconnect the client to the original master broker that you restart.
[ "ceph osd crush add-bucket dc1 datacenter ceph osd crush add-bucket dc2 datacenter", "ceph osd crush move host1 datacenter=dc1 ceph osd crush move host2 datacenter=dc1 ceph osd crush move host3 datacenter=dc2 ceph osd crush move host4 datacenter=dc2", "ceph osd crush move dc1 root=default ceph osd crush move dc2 root=default", "ceph osd crush rule create-replicated multi-dc default datacenter hdd", "ceph osd pool set cephfs_data crush_rule multi-dc ceph osd pool set cephfs_metadata crush_rule multi-dc", "ceph osd pool set cephfs_metadata pg_num 128 ceph osd pool set cephfs_metadata pgp_num 128 ceph osd pool set cephfs_data pg_num 128 ceph osd pool set cephfs_data pgp_num 128", "ceph osd pool set cephfs_data min_size 1 ceph osd pool set cephfs_metadata min_size 1 ceph osd pool set cephfs_data size 2 ceph osd pool set cephfs_metadata size 2", "Master Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> Slave Broker - DC1 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory> Backup Broker (Idle) - DC2 <paging-directory>mnt/cephfs/broker1/paging</paging-directory> <bindings-directory>/mnt/cephfs/data/broker1/bindings</bindings-directory> <journal-directory>/mnt/cephfs/data/broker1/journal</journal-directory> <large-messages-directory>mnt/cephfs/data/broker1/large-messages</large-messages-directory>", "<configuration> <core> <ha-policy> <shared-store> <master> </master> </shared-store> </ha-policy> </core> </configuration>", "<ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(\"(tcp://host1:port,tcp://host2:port)?ha=true&retryInterval=100&retryIntervalMultiplier=1.0&reconnectAttempts=-1\");", "<ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(\"(tcp://host3:port)?ha=true&retryInterval=100&retryIntervalMultiplier=1.0&reconnectAttempts=-1\");", "<ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(\"(tcp://host1:port,tcp://host2:port,tcp://host3:port)?ha=true&retryInterval=100&retryIntervalMultiplier=1.0&reconnectAttempts=-1\");", "failover:(amqp://host1:port,amqp://host2:port,amqp://host3:port)?jms.clientID=foo&failover.maxReconnectAttempts=20", "ceph health ceph status", "ceph -w", "health: HEALTH_WARN 2 osds down Degraded data redundancy: 42/84 objects degraded (50.0%), 16 pgs unclean, 16 pgs degraded", "BROKER_INSTANCE_DIR/bin/artemis stop", "BROKER_INSTANCE_DIR\\bin\\artemis-service.exe stop", "BROKER_INSTANCE_DIR/bin/artemis run", "BROKER_INSTANCE_DIR\\bin\\artemis-service.exe start" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/configuring-fault-tolerant-system-configuring